Site SSL Security Check: Getting From F to A

This should be subtitled “Why keeping the OS up-to-date isn’t enough”. Or: “wake up! aieeee! the hackers are coming! run for your lives!”.

Actually, nothing interesting has happened, I just ran some SSL tests1 on my home mailserver while getting ready to swap the SSL certificates from RapidSSL to Let’s Encrypt. This is necessary because various browsers will soon stop trusting RapidSSL-issued certs2, so my cert will start showing up as untrusted. I’d rather avoid that, so a switch to Let’s Encrypt is a no-brainer, especially with the automated renewal process being so smooth.

However, the “F” I was seeing in the test had nothing to do with the certificate issuer; it was simply down to me not keeping the Apache SSL configuration up to date, even though I had kept the server OS properly updated3. In fact, I don’t think I’ve touched the SSL configuration since 2014 at all, and I think I just accepted the default install back then, so null points for security-awareness 🙁

So, what was in the ssl config? This was the default:

#   SSL Protocol support:
# List the enable protocol levels with which clients will be able to
# connect.  Disable SSLv2 access by default:
SSLProtocol all -SSLv2

#   SSL Cipher Suite:
# List the ciphers that the client is permitted to negotiate.
# See the mod_ssl documentation for a complete list.
SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM

Oops – according to the “F” report I had a load of insecure ciphers enabled by default. Hardly surprising, since these were old settings and there has been a lot of cipher-cracking progress since then.

So – how to fix it? Simple, let’s cheat and look at one of my recent Apache VPS installs that I know tests as “A” on the defaults. Like so:

SSLProtocol all -SSLv2 -SSLv3

SSLHonorCipherOrder on

#(line breaks added for clarity)
SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM 
EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 
EECDH+aRSA+SHA256 EECDH !aNULL !eNULL !LOW !3DES !MD5 !EXP 
!PSK !SRP !DSS !EDH !RC4"

The compromised SSLv3 is disabled, the cipher suites are restricted to the strongest ones only, listed in order of strength, and SSLHonorCipherOrder makes the server enforce that order.
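As a quick local sanity check before re-running the online tests, something like the following can confirm the new settings have taken effect. This is just a sketch – the hostname is a placeholder, the service name and reload command vary by distro, and the -ssl3 option only exists if your OpenSSL build still includes SSLv3 support:

#check the config parses cleanly, then reload Apache
sudo apachectl configtest && sudo systemctl reload httpd

#SSLv3 should now be refused (expect a handshake failure)
openssl s_client -connect mail.example.com:443 -ssl3 < /dev/null

#a strong cipher should be negotiated on TLS 1.2
openssl s_client -connect mail.example.com:443 -tls1_2 < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'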

I edited my SSL config to match this and also fixed the certificate chain problem left over from the last time I renewed my cert, when I hadn’t updated the chain properly. Then I re-ran the tests…

Et Voila!

Fixed 🙂 It really was that easy to be insecure and also that easy to fix.

I’ll take that as a wake-up call to check SSL settings properly and keep an eye on the security recommendations as they change in future.

 

Prevent Non-Authenticated Access to WordPress REST API

I only recently became aware of the shenanigans from February, with hackers attacking vulnerabilities in the WordPress JSON REST API to deface sites. I wasn’t affected by this, and in truth I hadn’t taken any notice of the API at all since I haven’t done anything yet that needs it.

Now I have taken notice, and IMO there is a fundamental issue with it: it allows non-authenticated access to quite a lot of blog data in a way that is easy to probe very quickly with automated hacking tools4. This is sort-of not a big deal, as the accessible data is supposedly public anyway, but after having a quick poke around the API endpoints I found a glaring security hole that I really don’t want to expose.

Navigating to the “wp-json/wp/v2/users” endpoint lists users that have authored pages as well, not just the users with visible published posts. That includes my admin user, which created the “About” and “Contact” pages, and I really don’t want to expose any clues about the site administrator5.

So, I’m not happy at all having the API exposed, but fortunately the fix is very easy and straight out of the documentation. Adding the following code6 to functions.php or a site-specific plugin restricts all API access to authenticated users:

add_filter( 'rest_authentication_errors', function( $result ) {
    //another authentication check has already run - respect its result
    if ( ! empty( $result ) ) {
        return $result;
    }
    //no authentication yet, so require a logged-in user
    if ( ! is_user_logged_in() ) {
        return new WP_Error( 'rest_not_logged_in', 'You are not currently logged in.',
            array( 'status' => 401 ) );
    }
    return $result;
});
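A quick way to check the filter is doing its job is to repeat the anonymous request from the command line (example.com is a placeholder for the site URL) – it should now come back as a 401 rather than a user list:

#anonymous request: should print 401 once the filter is active
curl -s -o /dev/null -w "%{http_code}\n" https://example.com/wp-json/wp/v2/users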

This works fine and keeps the API itself intact, unlike other solutions that block the “users” endpoint completely7.

Another interesting thing I found is that requesting a list of categories from the API using “wp-json/wp/v2/categories” lists all categories, not just the ones used in published posts – thereby revealing non-public data – FAIL! Who knows what other holes there are in the API8???

Anyone else got misgivings about the potential security of the WordPress REST API?

[PS: For people not comfortable with coding there is a plugin that I believe uses the same approach as I do: Disable REST API by Dave McHale]

 

Fixing Up the Winter Bike: All the Broken Things

Like my bike, but mine isn’t as clean or blue and I’m not the rider in the pic… I’d not use such a stupid gear either.

Every cyclist worth their salt has a Winter Bike1. This is the (slightly) unloved machine that gets all the Winter training miles in through grim weather, mud, salt and general unpleasantness. Its only reward is to be cast aside as soon as the nice weather arrives and the Summer Bike comes out of the shed to rule the roost.

However, the sins of the cyclist catch up eventually and my bike was in a terrible state due to last Winter’s mud-bath and general neglect. Optimistically, I thought that it just needed a deep clean, a new chain and a few squirts of oil & grease. If only…

After a lengthy session with water, detergent, scrubbing brush and solvents it was clean, but wasn’t looking so good mechanically. The bottom bracket was shot, the front and rear brake pads were down to the limit, the rear mudguard was loose and rattling2 and the cassette looked a bit “hooky”3. So, a bit more work than expected.

All the broken things…

Time to take things off and pile them in a heap of shame.

The chainset is a marginal case – it’s an original Shimano 105 Octalink V14 with 50/39 rings. It has lasted pretty well5, but there are two snags with it nowadays since it is long-obsolete: a) the new bottom brackets are quite expensive6; b) Octalink V1 is pants, as the splines aren’t deep enough (or tapered) so un-fixable play develops between the bb and the crank.

All-in-all, I decided to avoid buying a new bb and instead to replace the lot with another chainset + bb that I happened to have in my parts box.

Right back at the start of my cycling obsession hobby I built my first road bike with a compact chainset7. I quickly discovered that a compact chainset is hopeless for flat areas since the small ring is just too small, so that chainset has been languishing in my parts box ever since. It’s in good nick, I have a bottom bracket to fit and also a 36T inner ring that I bought on a whim with a vague plan to go cyclocrossing8 so job’s a good ‘un. The gearing may be a bit odd, but this will also be helped by the fact that I had a new 12-21 9-speed cassette in my parts box to fit; the sizes will probably cancel out or something.

Next issue was the brake pads. I use Koolstop Salmons on my Winter bikes since they are the best of a bad lot in the wet. Swapping new ones into the existing shoes would have been an easy job if it hadn’t been a Winter bike, where everything corrodes and the tiny pad-retaining screws had seized in9. So I had to buy a couple of new sets of Ultegra shoes+pads10 and replace the pads with the Koolstops.

Once the brakes were sorted it was just routine re-assembly of the rest with anti-seize copper grease and a good oiling. Putting things together when they are clean and new is always easy and fun 🙂 As expected, the new chain skipped on the old cassette. I tried it to see if I could get away with it, but my initial eyeball estimate of wear was accurate and it was too far gone.

The replaced chainset on the bike & ready to go.

The (moderately) ingenious mudguard repair is not quite visible in this pic. I had some thick plastic card handy that was display packaging for a Lezyne minipump, so I made a curved section that fitted under the mudguard, a flat section that went on top under the fitting clamp and then drilled a couple of holes through the mudguard to clamp it all together with some button-head bolts. Worked a treat and I reckon I can eke the mudguard out for at least another year before having to replace it completely11.

The resulting fixed-up bike is a lot nicer to ride again with the annoying rattly squeaking from the rear mudguard gone. The gearing is very slightly lower, but I think it is actually a touch better than the previous ratios for the area that I mostly ride this bike in. A success!

Next up in Bike maintenance world is getting the Summer bike back on the road for the start of May. That should be a much quicker process as I think it just needs a new chain and a bit of a clean… I hope!

 

Upload to Google Cloud Storage from a Bash Script

This is the buy-one-get-one-free bonus prize from the AWS S3 upload script. Google Cloud has a (mostly) AWS-compatible mode as well as the OAuth 2.0 mode that is the native API. Connecting with OAuth is pretty involved12 and I’ve not seen it done directly from a shell script yet. Google do provide some Python tools for command-line access13, but they need Python 2.714 and are both dog-slow and clunky.

You can’t get away from the command-line tools totally with Google though, because they haven’t really finished the interface to their cloud services15 and there are quite a few things that can’t be done at all with the web interface, e.g. setting lifecycles on storage buckets16.

There is, however, a useful consequence of Google Cloud being AWS’s idiot step-child: the permissions set-up in AWS-compatible mode is MUCH easier than setting up permissions on AWS S317. This is all you have to do:

[Screenshot: Google Cloud Storage interoperability settings]

Just create a storage bucket, turn on interoperability mode for the project, copy down the key and secret and voila! The default permissions are those of the project owner, so read+write to the bucket just works.

The picture shows the view after interoperability mode is enabled. The key+secret can be deleted at any time, and/or further key+secret pairs created. Very easy.

So, here is the script.

#GS3 parameters
GS3KEY="my-key-here"
GS3SECRET="my-secret-here"
GS3BUCKET="bucket-name"
GS3STORAGETYPE="STANDARD" #leave as "STANDARD"; defaults to however the bucket is set up

function putGoogleS3
{
  local path=$1      #local directory containing the file
  local file=$2      #file name to upload
  local aws_path=$3  #path within the bucket, e.g. "/"
  local bucket="${GS3BUCKET}"
  local date=$(date +"%a, %d %b %Y %T %z")
  local acl="x-amz-acl:private"
  local content_type="application/octet-stream"
  local storage_type="x-amz-storage-class:${GS3STORAGETYPE}"
  #string to sign and its HMAC-SHA1 signature (AWS v2-style auth)
  local string="PUT\n\n$content_type\n$date\n$acl\n$storage_type\n/$bucket$aws_path$file"
  local signature=$(echo -en "${string}" | openssl sha1 -hmac "${GS3SECRET}" -binary | base64)

  curl --fail -s -X PUT -T "$path/$file" \
    -H "Host: $bucket.storage.googleapis.com" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "$storage_type" \
    -H "$acl" \
    -H "Authorization: AWS ${GS3KEY}:$signature" \
    "https://$bucket.storage.googleapis.com$aws_path$file"
}
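For illustration, a hypothetical call looks like this (the directory, file and bucket path are made up); because of curl’s --fail option the function’s exit status can be tested directly:

#upload ./backups/site-backup.tar.gz to the root of the bucket
putGoogleS3 "./backups" "site-backup.tar.gz" "/"
if [ $? -ne 0 ]; then
  echo "Google Cloud Storage upload failed" >&2
  exit 1
fi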

It is very similar to my earlier S3 post and usage is exactly the same. The upload speed seems very similar to S3, which is not that surprising as I’d expect their network infrastructure to be a similar scale and capability.

As to which of the two services is better, I haven’t got a clue. I think for a large-scale enterprise user AWS would win every time on the superiority of the tools, stability of platform and the fact that they offer proper guarantees of service etc. Google is much more of a beta product, the docs are full of warnings that the interface could change at any time and there don’t appear to be any warranties.

For a non-pro user the Google cloud storage is easier to use, in AWS-compatibility mode at least, so I think it’s a good choice for backup storage in less mission-critical applications. I’m using it for one of my applications and I haven’t had any issues yet.

 

Postscript:

I was finished, but I thought I’d just add a quick note on setting a bucket lifecycle. First, a JSON config file has to be created with the lifecycle description, e.g. this is a file I call lifecycle_del_60d.json:

{
  "rule":
  [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 60}
    }
  ]
}

Then the gsutil command needs to be run to set the lifecycle on a bucket:

gsutil lifecycle set lifecycle_del_60d.json gs://my-bucket-name

…and that is that, job done. Files older than 60 days in the bucket will be automatically deleted.
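To double-check that the rule has been applied, gsutil can read the lifecycle config back (the bucket name is a placeholder):

gsutil lifecycle get gs://my-bucket-name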

 

How to Make the WordPress Post Editor Default to Opening Links in a New Tab

This one definitely fits into the “Minor annoyances have a ridiculous amount of effort expended on them” category. See the pic for the minor annoyance; I had to click the “open link in a new tab” checkbox every time I added an external link in a post. One second wasted? Two tops on a slow day pre-coffee.

I should have just shrugged my shoulders and moved on, but that is not how things work in geek-land 😉

So I went searching. I was a bit surprised to discover that there wasn’t a way of setting this default; it seemed an obvious thing to want. Maybe there was a plugin? Nope18.

Maybe someone else had asked the question? Yes they had and there was a solution … which was wrong as it didn’t work19.

OK, I can see where this is going. I’m quite good at Javascript, so how hard can it be?20

Turns out the solution is harder than you might think: it requires supplying a setup function for TinyMCE21. I found a post on a related topic here which also doesn’t work (it’s for an older version of TinyMCE), but it gave me enough clues to put the rest of the pieces together myself after reading the V4 docs.

The code snippet below is the solution; I stuck it in a site-specific plugin I already had, but it could go in functions.php or wherever:

add_filter('tiny_mce_before_init', function($initArray){
    //add a setup fn
    $initArray['setup'] = <<<JS
function(editor) {
    //catch the ExecCommand event. The tinyMce mceInsertLink command then
    //triggers a custom WP_Link command
    editor.on('ExecCommand', function (e) {
      //console.debug('Command: ', e);
      if(e.command === 'WP_Link'){
          //release to UI so link value is populated
          setTimeout(function(){
            var linkSel = document.querySelector('.wp-link-input input');
            
            if(linkSel.value === ''){
                //no link so set the "Open link in a new tab" checkbox
                //to force the default as checked
                var linkTargetCheckSel = document.querySelector('#wp-link-target');
                linkTargetCheckSel.checked = true;
            }
          }, 0);
      }
    });

}
JS;
    $initArray['setup'] = trim($initArray['setup']); //prevent leading whitespace before fn
    return $initArray;
});

The trick is to catch the WP_Link event that the editor emits, look at the link value to see if it is blank and if it is, set the “open link in new tab” checkbox so that becomes the default.

Works just fine and would make the lamest plugin ever 🙂

[Update Dec 17 – Bah! One of the WP updates since I created this solution has broken it. I’ll research what happened and try to fix it again when I can be bothered.]

Upload to AWS S3 from a Bash Script

The Big River’s cloud storage is very good and cheap too, so it is an ideal place to store backups of various sorts. I wanted to do this upload from a Bash script in as simple a way as possible.

I have previous with the API, having used it from PHP, both with the AWS SDK and by rolling my own simplified upload function. That wasn’t exactly easy to do22, so I didn’t want to go there again.

A bit of Googling came up with this GitHub gist by Chris Parsons, which is almost exactly what I needed. I just had to parameterise it a bit more and add the facility to specify storage class and AWS region23.

Here is the finished result:

#S3 parameters
S3KEY="my-key"
S3SECRET="my-secret"
S3BUCKET="my-bucket"
S3STORAGETYPE="STANDARD" #REDUCED_REDUNDANCY or STANDARD etc.
AWSREGION="s3-xxxxxx"

function putS3
{
  path=$1
  file=$2
  aws_path=$3
  bucket="${S3BUCKET}"
  date=$(date +"%a, %d %b %Y %T %z")
  acl="x-amz-acl:private"
  content_type="application/octet-stream"
  storage_type="x-amz-storage-class:${S3STORAGETYPE}"
  #string to sign and its HMAC-SHA1 signature (AWS v2-style auth)
  string="PUT\n\n$content_type\n$date\n$acl\n$storage_type\n/$bucket$aws_path$file"
  signature=$(echo -en "${string}" | openssl sha1 -hmac "${S3SECRET}" -binary | base64)
  curl -s -X PUT -T "$path/$file" \
    -H "Host: $bucket.${AWSREGION}.amazonaws.com" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "$storage_type" \
    -H "$acl" \
    -H "Authorization: AWS ${S3KEY}:$signature" \
    "https://$bucket.${AWSREGION}.amazonaws.com$aws_path$file"
}

This works a treat on the small-to-medium sized compressed files that I use it for.

The only real snag with this function is that it doesn’t set an exit code, so it’s not easy to exit a script if the upload fails for some reason. I got round it by grepping the output of the curl and looking for the text “error”. This finds all the cases I tried apart from incorrect region.

The last line is changed to the following:

"https://$bucket.${AWSREGION}.amazonaws.com$aws_path$file" --stderr - | grep -i "error"
 #the --stderr bit onward is so that a return code is set if curl returns an 
 #error message from AWS
 #note that the value of the code is inverted from usual error interpretation: 
 #0=found "error", <>0=not found "error"
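With the grep in place the caller’s test is therefore inverted – success of the grep means the upload reported an error. A hypothetical call (directory and file name made up for illustration) looks like this:

#grep exits 0 when it finds "error", i.e. when the upload went wrong
putS3 "/var/backups" "db-dump.tar.gz" "/" && { echo "S3 upload failed" >&2; exit 1; }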

I wasn’t entirely happy with this rather crude workaround, but it did achieve the desired result.

Postscript:

I later found and tested the --fail option for curl, which sets the exit code properly. That means the grep hack isn’t needed for error detection and normal testing of the $? bash variable will work.
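So with --fail added to the curl call (and the grep removed), the usual convention applies again – a sketch with the same made-up file names:

#with --fail, a non-zero exit status now means the upload failed
putS3 "/var/backups" "db-dump.tar.gz" "/" || { echo "S3 upload failed" >&2; exit 1; }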

(I also tidied up and made the function variables proper locals – see the Google Cloud version for an example.)

Fun with Nginx: Getting WordPress to work in a Sub-Directory

Well, that was fun. After a marathon Google session and an almost infinite number of reloads of Nginx, I got WordPress with pretty permalinks working in an aliased sub-directory on my Windows test laptop.

(Clue – almost every post explaining how to do this is incorrect. An exercise for the interested reader is to find the one that worked …)

This is a simplified excerpt from a server{} block. [The WordPress root is mapped from somewhere else into the /wordpress sub-directory using the alias directive.]

location @wp {
  #no physical file matched, so hand the pretty permalink to index.php as a query string
  rewrite ^/wordpress(.*) /wordpress/index.php?$1;
}

location /wordpress {
    alias "c:/webserver/www/websites/wpnm"; #where wp actually is
    try_files $uri $uri/ @wp;

    #other config stuff here....
    
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_read_timeout 300s;
        fastcgi_pass 127.0.0.1:9123;
        fastcgi_param  SCRIPT_FILENAME $request_filename;

        include fastcgi_params;
    }
}

The key bit is the try_files directive: it looks for a real physical file and, when one isn’t found, the @wp rewrite grabs the pretty permalink and stuffs it into a query string tacked onto index.php. This works with all the permalink variations that WP offers.

If WordPress is simply running in the root directory then this is easy-peasy and the rewrite isn’t needed – simply stick /index.php?$args as the last term of try_files instead of @wp. That config can easily be found online.
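For completeness, that root-directory case is just the standard WordPress/Nginx recipe – roughly this (a sketch, not my actual config):

location / {
    #serve the file or directory if it exists, otherwise let WordPress route the request
    try_files $uri $uri/ /index.php?$args;
}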

The best resource I found for getting to grips with Nginx’s config was a tutorial on Linode’s website: https://www.linode.com/docs/websites/nginx/how-to-configure-nginx. This explains how the order of processing of the location directives works; essential knowledge to do anything above a very basic level.

Creating these configs isn’t that easy due to the limited debug tools. It’s not easy to see why things don’t work and the logs don’t help.

The hilariously uninformative error messages from the fastcgi module are typical – almost any config mistake produces a blank page with “No input file specified.”. Niiiice 🙂 Good luck finding out just what the incorrect parameter was 😉

The best debug suggestions I have found are in this blog post: https://blog.martinfjordvald.com/2011/01/no-input-file-specified-with-php-and-nginx/. I had a bit of fun since I started with $document_root$fastcgi_script_name as the SCRIPT_FILENAME, as the Nginx docs suggest, but that doesn’t work when using aliases.

Of course, that little gem is not well documented…..