Some notes about Digital Ocean servers

I’ve been unhappy with my current website host. My old hosting company, Verio, sold part of its webhosting business (including my websites) to another company. Verio used to provide PHP by default; the new host does not. I decided to try out another cloud provider, Digital Ocean, to see what it can offer.

I spent a while reviewing their help pages to prepare myself for surprises. In fact, I spent enough time on their documentation site that they offered me a $10 credit when I set up a new account. Eventually, I set up an account with them and received the credit. Click on the link above to get your own $10 credit. (I’ll get some credit, too.)

Setting up a droplet works just as described in their setup page. Digital Ocean also offers larger machine options ($320/month and above) for monster machines with dedicated CPUs and/or high RAM requirements, but I don’t need that. I want a small Ubuntu machine I can use to park the domain names I own but don’t use.

I like their suggestion to use public/private keys. I had not realized that ssh logins default to offering a single standard key pair from each client machine. I had set up separate key pairs for Bitbucket, GitHub and now Digital Ocean, and I was unable to log in automatically with the new key pair until I replaced the droplet’s public key with the default public key I created on my machines a long time ago. I still set up a password for the non-root account. I still have to key it in for ‘sudo’ stuff, which is annoying, but login works automatically and well.
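For what it’s worth, ssh can juggle multiple key pairs if you tell it which key belongs to which host. A minimal sketch (the hostnames and key filenames here are placeholders, not my actual setup):

```shell
# Generate a dedicated key pair for the droplet (filename is an example)
ssh-keygen -t ed25519 -f ~/.ssh/id_droplet -C "droplet key"

# Install the public key on the server (replace user and host)
ssh-copy-id -i ~/.ssh/id_droplet.pub user@droplet.example.com

# Add an entry to ~/.ssh/config so ssh offers the right key automatically:
#   Host droplet.example.com
#       User user
#       IdentityFile ~/.ssh/id_droplet
```

With an IdentityFile entry per host, the non-default keys work without replacing anything.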

I decided to follow their instructions for setting up a server firewall. I’m not familiar with iptables, and Digital Ocean recommends using ufw. I followed their instructions and discovered that new terminal windows were no longer logging in automatically with ssh; connections kept timing out. I rolled back my ufw changes, but I still had trouble with logins. I sent in a help ticket and received some additional instructions that look like they work. They have so far, so that’s good.
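My guess is the timeouts came from enabling the firewall before allowing ssh through. A sketch of the usual ufw order of operations (this follows the standard advice, not the exact instructions from the ticket, which I don’t have in front of me):

```shell
# Allow ssh BEFORE enabling the firewall, or new connections will hang
sudo ufw allow OpenSSH      # same effect as: sudo ufw allow 22/tcp

# Allow web traffic for the placeholder site
sudo ufw allow 80/tcp

# Only now turn the firewall on, then verify what's open
sudo ufw enable
sudo ufw status verbose
```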

I continued setting up the server. Apache installed without trouble, even though Digital Ocean recommends Nginx. (I don’t know Nginx yet.) I skipped MySQL because I did not need it for a placeholder / testing site. I followed their instructions for installing PHP and discovered that Ubuntu 16.04’s default repositories carry PHP 7, not PHP 5. I had to add an additional repo for the PHP 5 files. Not a big deal, once I knew what was going on. Finally, I have PHP 5 installed, and for a placeholder site, I like it.
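The extra-repository step looked roughly like this. I’m using the widely used ondrej/php PPA as the example here, which may not be the exact repo named in Digital Ocean’s instructions:

```shell
# add-apt-repository lives in this package on a minimal Ubuntu 16.04
sudo apt-get install software-properties-common

# Add a third-party PPA that still packages PHP 5.x for Ubuntu 16.04
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update

# Install PHP 5.6 and the Apache module
sudo apt-get install php5.6 libapache2-mod-php5.6
```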

Resetting after El Cap install

I have some more free time, so I decided to reset my El Capitan system as a development machine again.

I bought my license for BBEdit 11.5, so I installed that. I’ve used Visual Studio Code at work; I like how the window can take over the entire screen, but I don’t like how only three tabs for files are allowed per window. If you need more files open, you need to open another window, which seems weird. I’ve also used TextMate, and again I like how tabs are used to show files instead of the list BBEdit uses. Still, I’m used to BBEdit’s quirks, so I’ll go with that for the moment.

I need to reload Apache. I use the standard setup:

sudo apachectl start

However, Apache would not start this time. Running

apachectl configtest

shows an error with something called LockFile.

After some searching on StackExchange, I found out that LockFile is an Apache directive that only applies to Apache 2.2 and earlier. El Capitan runs Apache 2.4.16. Commenting out the section that includes the LockFile directive and restarting Apache fixed it. Localhost is working again.
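The fix is a config-file edit. A sketch of what I mean (the path is the stock El Capitan location; the exact file containing the LockFile block may differ on your machine):

```shell
# Open the stock Apache config
sudo nano /etc/apache2/httpd.conf

# Comment out the pre-2.4 block containing LockFile, e.g.:
#   #<IfModule !mpm_winnt_module>
#   #  LockFile "/private/var/log/apache2/accept.lock"
#   #</IfModule>
# (Apache 2.4 replaced LockFile with the Mutex directive.)

# Verify the config parses, then restart
apachectl configtest
sudo apachectl restart
```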

I’m using the instructions on this page to activate Apache and PHP. phptest.php works after restarting, but I have the PHP timezone problem again. I had to copy the included php.ini.default to php.ini, set the timezone to “America/Los_Angeles” and restart Apache. Problem solved. More stuff coming soon.
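The timezone fix, sketched as commands (the paths are the stock OS X locations; editing php.ini by hand works just as well as the sed one-liner):

```shell
# Create an editable php.ini from the shipped default
sudo cp /etc/php.ini.default /etc/php.ini

# Uncomment and set date.timezone (BSD sed syntax, as on OS X)
sudo sed -i '' 's|^;date.timezone =|date.timezone = "America/Los_Angeles"|' /etc/php.ini

# Restart Apache so PHP picks up the change
sudo apachectl restart
```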

reloading files … not fun

My workstation has a weird short circuit somewhere that has to be tracked down by the help desk. While I wait, I’ve tried to set up my work on the department laptop. It has not been easy.

Luckily, I had pushed copies of what I was working on into repositories for off-site storage. Unfortunately, those repositories were slightly out of date. From what I can tell, I did not lose anything from the database archives. The CakePHP 3 repo is about two weeks out of date, but I convinced myself that was OK, since I spent those two weeks chasing a dead end involving jQuery, AJAX and calls back to the original CakePHP action. It should be easy, but I have not been able to track down the answer yet.

I cloned the archive repository locally and reloaded that SQL into the local copy of MySQL. That went fine, with some minor issues. You can’t load a table with foreign keys until the related tables are loaded first. (Of course.) And if a file is too big for phpMyAdmin’s 2 MB upload limit, zip it and try again. (That answer was staring at me all day long. Very annoying, once I found it.) Everything looks fine, so I move on to the next action.
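Both problems also have command-line workarounds. A sketch, assuming a local MySQL server, a database named mydatabase and a dump file named dump.sql (all placeholder names):

```shell
# Compress a large dump so it fits under phpMyAdmin's upload limit
# (phpMyAdmin accepts .sql.gz uploads)
gzip -c dump.sql > dump.sql.gz   # -c writes to stdout, keeping the original

# Or skip phpMyAdmin entirely: turn off foreign key checks so table
# ordering doesn't matter, load the dump, then turn the checks back on
mysql -u root -p mydatabase \
  -e "SET FOREIGN_KEY_CHECKS=0; SOURCE dump.sql; SET FOREIGN_KEY_CHECKS=1;"
```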

Next, I needed to reload the CakePHP repository. Installing CakePHP is a little more involved than cloning a repository. For starters, the docs say I need Composer, which was not loaded on the laptop. I’m not sure why that would be needed for a repository clone, but why not? It can’t hurt, and I might need it, so I loaded Composer using Homebrew. I found my old instructions for installing Composer, but I had forgotten the final instruction, about ignoring dependencies. I’m sure I thought it was obvious at the time, but it’s better to write it down. Finally, Composer is loaded, updated and ready to go.
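For the record, a sketch of the Homebrew route (the “ignore dependencies” step from my old notes isn’t reproduced here, since I don’t have the exact flag in front of me, and the formula name may vary by Homebrew version):

```shell
# Install Composer itself
brew install composer

# Confirm it runs
composer --version

# Inside a cloned CakePHP app, this repopulates the vendor/ directory
# from composer.json (a plain git clone does not include vendor/)
composer install
```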

Next, I remember that CakePHP needs specific PHP extensions to run properly (mbstring, openssl, and intl). The one I did not have was the intl extension. I checked my notes again and found a very good description of how to install the intl extension. That’s done, finally. I’m ready for CakePHP.
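One common route for the intl extension on OS X goes through PECL. A sketch, assuming Homebrew is available for the ICU libraries (my actual notes may have used a different method):

```shell
# intl is built on ICU, so install that first
brew install icu4c

# Build the extension through PECL; point it at Homebrew's ICU when prompted
sudo pecl install intl

# Enable it in php.ini, then restart Apache
echo "extension=intl.so" | sudo tee -a /etc/php.ini
sudo apachectl restart

# Confirm the extension loaded
php -m | grep intl
```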

I clone the CakePHP repository. When I try to load it, it bombs. I get an error about permissions denied in the logs folder. I remember this error from another installation, so I’m confident I can track down the issue. I also notice that the vendors directory is empty. Now that I have the composer.json file, I update Composer and run it again. The vendors directory is back, but I still have the permissions problem.

While checking the vendors directory, I went to the config directory for some reason. I noticed that my config/app.php file is missing. That’s odd. The app.php file controls database access, so I’m surprised to see it’s missing. I finally get access to the Time Machine drive of the old machine and copy the latest version of that app.php file over. The permissions problem is not solved, so I decide to start from the basic installation with a test site described on the CakePHP web pages.

I stumble around, comparing user/group settings on folders between the fresh install and the Time Machine backup. Eventually, I get them set to something that looks like it works. However, the links to the CakePHP CSS files are not working. I remember this from a previous install; it has to do with Apache and how it blocks access to .htaccess files (or something like that).

I track down the section in the CakePHP documentation on URL rewriting and figure out how to set up Apache to read CakePHP’s .htaccess files. While there, I find a related link that tells me exactly how to fix the logs and tmp directories. I’m almost ready, except that the archive tables I loaded have disappeared at some point. Very strange. This matters because I built a small website that uses CakePHP to display the data in those archive tables. Next stop: what happened to the archive tables?
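The two fixes, sketched as commands. The app path and the _www Apache user are the OS X defaults, and both are assumptions on my part; check your own setup:

```shell
# 1. Let Apache honor .htaccess: the <Directory> block in httpd.conf
#    covering the app needs "AllowOverride All", with mod_rewrite enabled.

# 2. Make tmp/ and logs/ writable by the web server, per the CakePHP
#    installation docs
cd /Library/WebServer/Documents/myapp
sudo chown -R _www tmp logs
sudo chmod -R 775 tmp logs
```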

quick note: Bash bug

Today’s news about a bash bug has sparked the internet equivalent of lots of people yelling about something they don’t really understand. I will admit that I don’t fully understand the bug, but I stumbled across a good explanation of the bash bug here.

UC San Diego computer security has also reviewed the infection vectors. They say that Apache modules (mod_php, mod_perl, mod_python) don’t appear to be vulnerable. However, the article linked above does say that system calls made from PHP functions may be vulnerable. I don’t know enough to understand how the module can be OK while a system call going through the module is not. The key appears to be crafting a request that eventually reaches bash, which then acts on the original bug. Seems like a lot of work, but it’s a hole. An easier possibility would be to attack Apache systems that use CGI directly.
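The standard one-liner for checking whether a given bash is affected (this is the widely circulated test, not something from the linked article):

```shell
# On a vulnerable bash, the function definition stored in the environment
# variable is parsed AND the trailing command runs, printing "vulnerable"
# before the test line. A patched bash only prints the last line.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```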

Installing patches is recommended. When Apple will patch OS X is unknown. Red Hat rolled out an incomplete patch earlier today; it has been reviewed, and a better patch is expected soon.

.htaccess and Mavericks upgrade

After the Mavericks upgrade, I noticed in my Apache error logs that I was not seeing the blocked requests I expected from non-work IP addresses. I discovered that the Mavericks upgrade replaced /etc/apache2/httpd.conf, which is no surprise. What I should have done was check exactly what changed in httpd.conf.

It turns out that the upgrade reset the AllowOverride directive for DocumentRoot to its original value, ‘None’. When I first started using this box, I had changed it so that the .htaccess file in DocumentRoot could control access.

In the new httpd.conf, I changed that directive back to “AllowOverride All” for DocumentRoot only and restarted Apache. (I always forget: sudo apachectl restart.) Apache is running again. I checked it against the local (beta) sites running on this box. A day later, I see the familiar line in the error log file: access blocked.
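The relevant httpd.conf fragment, as I understand it (the DocumentRoot path is the OS X default; verify against your own config):

```shell
# /etc/apache2/httpd.conf -- the <Directory> block for DocumentRoot:
#
#   <Directory "/Library/WebServer/Documents">
#       ...
#       # The upgrade resets this to None; .htaccess is ignored until
#       # it is set back to All
#       AllowOverride All
#       ...
#   </Directory>

# Then confirm the config parses and restart
apachectl configtest
sudo apachectl restart
```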

Lion to Mavericks upgrade

I finally had enough time to install the Mavericks upgrade to my work machine. It’s a good thing, too, since I was strongly encouraged to make the migration as soon as possible.

I downloaded the installer from the Mac App Store and ran it after downloading. I immediately ran into a problem. The message read:

“The OS X upgrade couldn’t be installed because the disk ‘Mac HD’ is damaged and can’t be repaired. After your computer restarts, backup your data, erase your disk and try installing again.” I thought it would be safe to start off of one of my duplicated hard drives, so I switched over to that one.

After some discussion with campus tech support, they suggested I get an external Time Machine backup and be ready to erase the hard drive. The backup finished after work, so I decided to run the installer the next work day. While I waited for the backup, I did a little research and discovered that a “Recovery Partition” could be used to solve the hard drive issue.

The next work day, I did some quick work, then ran one more backup. I switched back to Mac HD, restarted and hit the key sequence to start from the recovery partition (Command-R). I did not see anything that looked like a recovery application or a recovery partition during startup. I eventually logged into my account on the hard drive, then ran the Mavericks installer again. This time I saw no error messages or warnings. After about an hour, I was running Mavericks (OS X 10.9.4).

There were some things to fix after the restart. I found this page that described what I needed to do. The short version goes like this:

  • Apache had to be turned on again
  • PHP had to be reset (running 5.4.24; OS X is always behind)
  • MySQL had to be reinstalled (5.6.19 this time)
  • The MySQL socket error is still around, so I fixed that
  • phpMyAdmin had to be installed (4.2.5). It’s a lot easier now, but I still had to load the dumps from the last version. However, I forgot to record the passwords for all the accounts that use phpMyAdmin and MySQL. Finding them all was fun. Get that info before running the upgrade.
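The socket error is the old OS X mismatch: PHP looks for the MySQL socket at /var/mysql/mysql.sock while MySQL creates it in /tmp. Two common fixes, sketched (pick one; the paths are the usual defaults, so check yours):

```shell
# Fix 1: symlink the socket to where PHP expects it
sudo mkdir -p /var/mysql
sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock

# Fix 2: instead, point PHP at the real socket in /etc/php.ini:
#   pdo_mysql.default_socket = /tmp/mysql.sock
#   mysqli.default_socket    = /tmp/mysql.sock
# then restart Apache:
#   sudo apachectl restart
```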

I still have to check on Python and my day-to-day apps. I lost Parallels, but I haven’t used it in a while, so that’s OK. Carbon Copy Cloner may not run in its current version, so I’ll have to upgrade ($40!).