Message Notifications using PHP and Pushbullet

I've been amusing myself following through an excellent series on self-hosting WordPress by Ashley Rich, mostly so I can experiment with Nginx, pinch the best ideas and use them myself 🙂

Everything had gone swimmingly on a test VPS running the latest LTS version of Ubuntu, but when I tried to implement some of the ideas on the shared hosting where this blog runs I ran into a bit of trouble. I was trying to get WordPress installation checksum verification and push messaging via Pushbullet working, but the HTTPS connection to the Pushbullet API would always fail. After some experimentation it turned out that the command-line version of Curl was ancient and it wasn't going to work. I already knew I would be wasting my time asking the hosting company to upgrade it; I had tried that before with another obsolete shell program and they (politely) declined. So, time to get programming!

I already know that the Python version on the hosting is ancient and doesn't have SSL support at all, ditto Perl, so I am going to have to use PHP. I won't have a problem with that as there are versions up to 7.1 on this host and the SSL libraries are properly current too. Command-line PHP isn't great, but it will work just fine for this application.

Here is the resulting simple program to send a notification using the Pushbullet API. This is called from a Bash script if the WordPress verification checks fail; the resulting warning message turns up on all my registered devices.

<?php
/*
 * PHP program to send messages to pushbullet.
 * Usage: php message-pushbullet.php creds='...' title='...' body='...' [quiet] [headers]
 */
error_reporting(E_ALL);

$url = "https://api.pushbullet.com/v2/pushes"; //Pushbullet pushes endpoint
$type = "note";

if(php_sapi_name() == 'cli'){
    //convert cli params to GET params (chop off name of program which is 1st)
    parse_str(implode('&', array_slice($argv, 1)), $_GET);
}

//process parameters
$credentials = isset($_GET['creds']) ? trim($_GET['creds']) : "";
$title = isset($_GET['title']) ? trim($_GET['title']) : "";
$body = isset($_GET['body']) ? trim($_GET['body']) : "";
$quiet = isset($_GET['quiet']) ? true : false; //silent mode
$headers = isset($_GET['headers']) ? true : false; //show headers for debugging

if(!$credentials || !$title || !$body){
    echo "Usage: message-pushbullet creds='...' title='...' body='...' [quiet] [headers]\n";
    exit(1);
}

$post_fields = json_encode(compact('type', 'title', 'body'));

$ch = curl_init();

$options = [
    CURLOPT_URL => $url,
    CURLOPT_HTTPHEADER => ['Access-Token: ' . $credentials, 
                            'Content-Type: application/json',
                            'Content-Length: ' . strlen($post_fields)],
    CURLOPT_POST => true,
    CURLOPT_POSTFIELDS => $post_fields,
    CURLOPT_HEADER => $headers,
    CURLOPT_RETURNTRANSFER => true, //capture the response rather than echoing it
];
curl_setopt_array($ch, $options);

$result = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if(!$quiet){
    echo $result;
}

//check ok
if($status == 200){
    exit(0);
}
exit(1);
/* end */

PHP's command-line parameter handling isn't great as standard, so I cheat and turn the arguments into GET parameters. This handily means the program would also work as a web script with no further effort.
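As an illustration, the call from my verification script looks something like this (the wp-cli check, paths and token are placeholders, not my actual script):

#send a warning if the WordPress core files fail verification
PB_TOKEN='o.xxxxxxxxxxxxxxxx' #Pushbullet access token (placeholder)

if ! wp core verify-checksums --path=/var/www/blog; then
    php message-pushbullet.php creds="$PB_TOKEN" \
        title="WordPress verification failed" \
        body="Checksum mismatch detected on $(hostname)" quiet
fi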

Oddly, the API docs specify the parameters as JSON, but the version Ashley Rich created uses standard POST-style variables set using curl's -d option and this works too. The Pushbullet API talks about maintaining backwards-compatibility with earlier versions, so I suspect there was a pre-JSON version at one time.
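For reference, that form-style call is easy to reproduce with a modern curl (the token is a placeholder; this is a sketch, not Ashley's exact command):

curl -s -X POST https://api.pushbullet.com/v2/pushes \
    -H "Access-Token: o.xxxxxxxxxxxxxxxx" \
    -d type=note \
    -d title="Test title" \
    -d body="Test body"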

Most of the time was spent sorting out the Curl parameters needed 🙂


Simple Python SMTP mail script – Part 2

Part 1 explained why I wanted to send an SMTP email from the command line and the constraints I was under; this is how I got there.

I started out not really being that familiar with Python, but I will always contend that knowing how to program is independent of the choice of language, so I wasn't worried.

Fortunately (?), the very old version of Python on my shared hosting meant that I didn't need to concern myself with the Python 2 vs Python 3 debate, or which library to choose for some functions (no choice), so I could just get on with it.

Since I wanted to make this a general-purpose tool, I spent some extra time getting SSL on ports 465 and 587 working as well as non-encrypted on port 25. My laptop has the latest version of Python 2.7, so I could test these even if I couldn't use them on TSOHost.

Here is the script:

#!/usr/bin/env python
# Simple smtp mailer program. Designed to work on old versions of Python (2.4).
# Has lots of command line parameters, but intended to run from shell scripts
# so doesn't matter. Tested on ports 25, 465 (encrypted) and 587 (encrypted).
# Only supports plain text auth via username & password.
# The encrypted modes require 2.6 or later as smtplib.SMTP_SSL is not present
# in earlier versions.
import smtplib
import datetime, sys, getopt, re
#import email.utils not on tsohost
#from email.mime.text import MIMEText not on tsohost
#SMTP_SSL not on tsohost so ports 465 and 587 won't work

port = ''
host = ''
mfrom = ''
mto = ''
msubj = ''
username = ''
password = ''
force_encrypt = False
quiet = False

def usage(progName = ''):
    print("Usage: %s -p port -o host -f mailFrom -t mailTo -s subject [opts] 'messageBody'" \
          % progName)
    print("Other options:")
    print("-u username -w password : Where login is required")
    print("-e : Force encryption - prevent plaintext username/password if not encypted")
    print("-q : Quiet")
    print("-h : Help")
    print("messageBody will be taken from stdin if parameter not present")
#get command line options
try:
    myopts, args = getopt.getopt(sys.argv[1:],"p:o:f:t:s:u:w:eqh")
except getopt.error, e:
    print("Error: %s" % str(e))
    usage(sys.argv[0])
    sys.exit(2)
for o, a in myopts:
    if o == '-p':
        port = a
    elif o == '-o':
        host = a
    elif o == '-f':
        mfrom = a
    elif o == '-t':
        mto = a
    elif o == '-s':
        msubj = a
    elif o == '-u':
        username = a
    elif o == '-w':
        password = a
    elif o == '-e':
        force_encrypt = True
    elif o == '-q':
        quiet = True
    elif o == '-h':
        usage(sys.argv[0])
        sys.exit(0)

#check for minimal required args
if(port == '' or host == '' or mfrom == '' or mto == '' or msubj == ''):
    print("Missing args:")
    usage(sys.argv[0])
    sys.exit(2)

#read from stdin for message body if no further args
if(len(args) == 0):
    mbody = sys.stdin.read()
else:
    mbody = args[0]

#generate correct mail date format    
date = datetime.datetime.now().strftime("%a, %d %b %Y %H:%M:%S %z")
#generate message header and body
msg = "From: %s\r\nTo: %s\r\nSubject: %s\r\nDate: %s\r\n\r\n%s" \
       % (mfrom, mto, msubj, date, mbody)

#set to 1 for lots of debug messages
debuglevel = 0
#indicate encrypted
encrypted = False

try:
    if(port == '465'):
        #SSL from the start on the smtps port
        smtp = smtplib.SMTP_SSL(host, port)
        encrypted = True
    else:
        smtp = smtplib.SMTP(host, port)
    smtp.set_debuglevel(debuglevel)
    smtp.ehlo()
    #see if STARTTLS is offered; if so, switch to encrypted mode
    if(re.search(r'^STARTTLS\b', smtp.ehlo_resp, re.M | re.I)):
        #print("STARTTLS accepted")
        smtp.starttls()
        smtp.ehlo() #re-issue EHLO over the now-encrypted connection
        encrypted = True
    #see if login required
    #(order of login & plain not specified and there can be other options)
    if(re.search(r'^AUTH\b.*(?:LOGIN\b.*PLAIN\b|PLAIN\b.*LOGIN\b)', \
                 smtp.ehlo_resp, re.M | re.I)):
        #print("Plain login accepted")
        if(force_encrypt and not encrypted):
            print("Error: Plain text login over non-encrypted connection not allowed.")
            sys.exit(1)
        smtp.login(username, password)
    #send the mail!
    smtp.sendmail(mfrom, mto, msg)
    smtp.quit()
    if(not quiet):
        print("Mail sent successfully.")
except Exception, e:
    print("Error: unable to send mail.")
    print(str(e))
    sys.exit(1)

That's all there is to it. The response from smtp.ehlo() is tested to see if STARTTLS is supported; if it is, encryption is kicked in before doing a plain-text login, provided AUTH PLAIN LOGIN is supported. If the port is 465 then the sequence starts in SSL mode instead by using smtplib.SMTP_SSL.
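If you want to see what the script is reacting to, the same negotiation can be watched by hand with openssl's client (the host name is a placeholder):

#show the EHLO response and upgrade to TLS on the submission port
openssl s_client -connect smtp.example.com:587 -starttls smtp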

Usage looks like:

#script name, host and addresses below are placeholders
./smtpmail.py -p 25 -o smtp.example.com \
-f "A User <user@example.com>" \
-t "recipient@example.com" \
-s "Py Test Mail" \
"Test Message Here..."

Pretty simple really and I was pleased with how well the result worked. Python didn't take much brainpower to get the basics going, at least at the noddy level needed to make something like this work.

It helped to be familiar with the workings of mailservers though. This is the legacy of getting my own home mailserver working using Dovecot & Postfix, and of having to relay through a port 465 tunnel to Virgin Media's servers because Postfix doesn't support port 465.

Simple Python SMTP mail script – Part 1

I host this blog via TSOHost in the UK. On the surface they look like just another low-cost hosting provider, but under the covers they are actually quite programmer-friendly. My cheapo hosting plan has SSH access, cron jobs, up-to-date PHP 7 and MySQL and surprisingly good tech support. All good so far and way in excess of the average WordPress user's needs.

However, I'm not an average (or sensible) WordPress user, so when I started actually writing blog posts I also started thinking about off-site backups, and it was far too easy and boring just to install a backup plugin and call it done. As a minor part of my day job I am involved in using Amazon Web Services, mostly S3 for cloud storage, so I started musing about sticking my blog backups in an S3 bucket.

I already had a Bash script to create a DB backup, tar the wp-content dir, gzip it all up and send it to S3, so stick it on a cron job and pretty much job done, eh? … Not quite; I like to get a daily email reminder that the backup ran, with some info about the result.
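For context, the script is roughly this shape (all names and paths here are invented for illustration; putS3 is the upload function from my S3 post):

#rough shape of the nightly backup job
STAMP=$(date +%Y%m%d)
mysqldump -u backupuser -p"$DBPASS" blog_db > "db-$STAMP.sql"
tar -czf "blog-backup-$STAMP.tar.gz" wp-content "db-$STAMP.sql"
putS3 "$PWD" "blog-backup-$STAMP.tar.gz" "/"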

…and that is when things started to unravel…

Shared hosting is cheap, but it comes with quite a few limitations compared to non-shared alternatives. The main problem is that shared hosting is a magnet for every spammer and scammer around, so the hosting providers have to severely limit access to normal utilities like command-line mail and sendmail. Even when these utilities are there and work, the outgoing mail host is probably on every blacklist around, so your mail is going nowhere. This is true for TSOHost too; I couldn't use any of the easy ways of sending mail from my backup script.

However, all is not lost: they provide an internal SMTP server inside their cloud server farm that can send mail, and I already know it works fine as I have used it from one of my PHP applications. Problem solved?

Not quite. Another issue with shared hosting appears – a lot of expected utilities are missing from the command line (ssmtp? – nope) and what is there is old. Like – really old. They can't be updated and nothing can be installed except user-land programs.

So how about doing something in a scripting language? TSOHost runs the latest PHP and I could pull down PHPMailer and whip a script up pretty quickly; I'm pretty good with PHP, so that would be easy. It just feels messy though – I started with a single Bash script and I'd end up bolting another script and external libraries on the side just to send an email.

How about Perl? Version 5.8.8 is installed and it has the Net::SMTP module. It doesn't have the Net::SMTP::SSL module, but that is OK as the TSOHost internal SMTP server doesn't need encrypted connections. Yes?

No. I don't really know Perl beyond superficial one-liners in shell scripts; plus there is something …painful about Perl. I'll consider that a last resort.

How about Python? Version 2.4.3 is there and the smtplib module loads. Again, SSL won't work as that didn't come in until v2.6, but that's not a problem here. Yes?

Yes! I don't know Python well either, but that's what Skynet-Beta Google is for eh?

To be continued…


WordPress Plugin WP-Mail-SMTP Self-signed Certificate Patch

I am using an excellent plugin by Callum Macdonald to send mail from my WordPress blog. This works absolutely fine with the various mail-servers that I have pointed it at, but embarrassingly not with my own home mail-server.

Now, this isn't exactly a problem since I don't need to send mail via my own home mail-server from my blog, but as with all things geeky I got curious about why it wouldn't work. I know the mail-server works fine and I can connect and send/receive from all my computers and devices, so I was a bit puzzled. What was especially odd is that it worked from WAMP on my laptop, but not from a test site in a Vagrant virtual machine on the same laptop. So I went Googling and found this. Doh! My WAMP installation is still on PHP 5.5, but the Vagrant instance is on 7.1 – and I still have a self-signed certificate on my mail-server.

So, just because I could, I created a very simple WordPress plugin that applies a filter to WP Mail SMTP to add the extra parameters needed for self-signed certificate connections. The code looks like this:

add_filter( 'wp_mail_smtp_custom_options', function( $phpmailer ){

    $phpmailer->SMTPOptions = array(
        'ssl' => array(
            'verify_peer' => false,
            'verify_peer_name' => false,
            'allow_self_signed' => true
        )
    );
    return $phpmailer;
} );

Couldn't be simpler, eh? This can be stuck in functions.php or in a plugin like I did.

My plugin can be downloaded here: WP-Mail-SMTP Self-signed Certificate Patch Plugin.

Now, just a word of caution – you really shouldn't do this. You should replace your self-signed certificate with a real one from Let's Encrypt or a commercial provider. There's not much excuse for using self-signed certificates now that it is so easy and cheap / free to get a real one (except for testing).

I had even less excuse as I already have a Let's Encrypt certificate on Apache running on the same physical server as the mail-server. After making and testing this plugin I actually fixed the certificate problem properly, and I can confirm that exactly the same certificates that work for Apache (and Nginx) also work with Postfix. The only remaining issue to sort out is to modify the routine that restarts Apache on certificate renewal so it also restarts Postfix.
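That should only need a one-line change to the renewal job; something along these lines (assuming certbot and systemd – untested, so treat it as a sketch):

#reload the web server and restart Postfix whenever the cert renews
certbot renew --renew-hook "systemctl reload apache2 && systemctl restart postfix"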

A few minutes work and I've got until June to do it 😉



Site SSL Security Check: Getting From F to A

This should be subtitled "Why keeping the OS up-to-date isn't enough". Or: "wake up! aieeee! the hackers are coming! run for your lives!".

Actually, nothing interesting has happened; I just ran some SSL tests on my home mailserver while getting ready to swap the SSL certificates from RapidSSL to Let's Encrypt. This is necessary because various browsers will soon stop trusting RapidSSL-issued certs, so my cert will start showing up as untrusted. I'd rather avoid that, so a switch to Let's Encrypt is a no-brainer, especially with the automated renewal process being so smooth.

However, the "F" I was seeing in the test was nothing to do with the certificate issuer; it was simply due to me not keeping the Apache SSL configuration up to date even though I had kept the server OS properly updated. Actually, I don't think I've touched the SSL configuration since 2014 at all, and I think I just accepted the default install then, so null points for security-awareness 🙁

So, what was in the SSL config? This was the default:

#   SSL Protocol support:
# List the enable protocol levels with which clients will be able to
# connect.  Disable SSLv2 access by default:
SSLProtocol all -SSLv2

#   SSL Cipher Suite:
# List the ciphers that the client is permitted to negotiate.
# See the mod_ssl documentation for a complete list.

Oops, according to the "F" report I've got a load of insecure cyphers enabled by default – hardly surprising since they were old settings and there has been a load of cypher-cracking since then.

So – how to fix it? Simple: let's cheat and look at one of my recent Apache VPS installs that I know tests as "A" on the defaults. Like so:

SSLProtocol all -SSLv2 -SSLv3

#strong cyphers only, strongest first (a representative list - the original
#long single-line list isn't reproduced here)
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:
    ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:!aNULL:!MD5

SSLHonorCipherOrder on

#(line breaks added for clarity)

The compromised SSLv3 is disabled and the cypher suites are restricted to the strongest ones only, in order of strength.
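A quick way to prove that sort of change from the command line (the host name is a placeholder, and this assumes your local openssl was built with SSLv3 support):

#this handshake should now be refused by the server
openssl s_client -connect mail.example.com:443 -ssl3 < /dev/null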

I edited my SSL config to match this and also fixed the certificate chain problem where I didn't update it properly last time I renewed my cert. Then I re-ran the tests…

Et Voila!

Fixed 🙂 It really was that easy to be insecure and also that easy to fix.

I'll take that as a wake-up call to check SSL settings properly and keep an eye on the security recommendations as they change in future.


Prevent Non-Authenticated Access to WordPress REST API

I only recently became aware of the shenanigans back in February with hackers attacking vulnerabilities in the WordPress JSON REST API to deface sites. I wasn't affected by this, and in actuality I hadn't taken notice of the API at all since I haven't done anything yet that needs it.

Now I have taken notice, and IMO there is a fundamental issue with it: it allows non-authenticated access to quite a lot of blog data in a way that is easy to probe very quickly with automated hacking tools. Now this is kinda not a big deal as the accessible data is supposedly public anyway, but after having a quick poke around the API endpoints I found a glaring security hole that I really don't want to expose.

Navigating to the "wp-json/wp/v2/users" endpoint lists users that have authored pages, not just the users with visible published posts. That includes my admin user, which created the "About" and "Contact" pages, and I really don't want to expose any clues about the site administrator.
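Checking your own site takes one line (the domain is a placeholder):

#list the users the API hands out with no authentication at all
curl -s https://example.com/wp-json/wp/v2/users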

So I'm not happy at all having the API exposed, but fortunately the fix is very easy and straight out of the documentation. Adding the following code to functions.php or a site-specific plugin restricts all API access to authenticated users:

add_filter( 'rest_authentication_errors', function( $result ) {
    if ( ! empty( $result ) ) {
        return $result;
    }
    if ( ! is_user_logged_in() ) {
        return new WP_Error( 'rest_not_logged_in', 'You are not currently logged in.',
            array( 'status' => 401 ) );
    }
    return $result;
} );

This works fine and keeps the API functional for logged-in users, unlike other solutions that block the "users" endpoint completely.

Another interesting thing I found is that requesting a list of categories using "wp-json/wp/v2/categories" lists all categories, not just the ones used in published posts – thereby revealing non-publicly available data – FAIL! Who knows what other holes there are in the API?

Anyone else got misgivings about the potential security of the WordPress REST API?

[PS: For people not comfortable with coding there is a plugin that I believe uses the same approach as I do: Disable REST API by Dave McHale]


Upload to Google Cloud Storage from a Bash Script

This is the buy one, get one free bonus prize from the AWS S3 upload script. Google Cloud has a (mostly) AWS-compatible mode as well as the OAuth 2.0 mode that is the native API. Connecting with OAuth is pretty involved and I've not seen it done directly from a shell script yet. Google do provide some Python tools for command-line access, but they need Python 2.7 and are both dog-slow and clunky.

You can't get away from the command-line tools totally with Google though, because they haven't really finished the interface to their cloud services and there are quite a few things that can't be done at all with the web interface, e.g. setting lifecycles on storage buckets.

There is, however, a useful consequence of Google Cloud being like AWS's idiot step-child: the permissions set-up in AWS-compatible mode is MUCH easier than setting up permissions on AWS S3. This is all you have to do:

[Image: Google Cloud Storage interoperability settings]

Just create a storage bucket, turn on interoperability mode for the project, copy down the key and secret and voila! The default permissions are those of the project owner, so read+write to the bucket just works.

The picture shows the view after interoperability mode is enabled. The key+secret can be deleted at any time, and/or further key+secrets created. Very easy.

So, here is the script.

#GS3 parameters
GS3KEY="..."    #interoperability access key (placeholder)
GS3SECRET="..." #interoperability secret (placeholder)
GS3BUCKET="my-bucket-name"
GS3STORAGETYPE="STANDARD" #leave as "standard", defaults to however bucket is set up

function putGoogleS3
{
  local path=$1
  local file=$2
  local aws_path=$3
  local bucket="${GS3BUCKET}"
  local date=$(date +"%a, %d %b %Y %T %z")
  local acl="x-amz-acl:private"
  local content_type="application/octet-stream"
  local storage_type="x-amz-storage-class:${GS3STORAGETYPE}"
  local string="PUT\n\n$content_type\n$date\n$acl\n$storage_type\n/$bucket$aws_path$file"
  local signature=$(echo -en "${string}" | openssl sha1 -hmac "${GS3SECRET}" -binary | base64)

  curl --fail -s -X PUT -T "$path/$file" \
    -H "Host: $" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "$storage_type" \
    -H "$acl" \
    -H "Authorization: AWS ${GS3KEY}:$signature" \

It is very similar to my earlier S3 post and usage is exactly the same. The upload speed seems very similar to S3 too, which is not that surprising as I'd expect their network infrastructure to be of a similar scale and capability.
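For completeness, a call looks like this (paths invented):

#upload a local file into the root of the configured bucket
putGoogleS3 "/home/me/backups" "blog-backup.tar.gz" "/"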

As to which of the two services is better, I haven't got a clue. I think for a large-scale enterprise user AWS would win every time on the superiority of the tools, stability of platform and the fact that they offer proper guarantees of service etc. Google is much more of a beta product, the docs are full of warnings that the interface could change at any time and there don't appear to be any warranties.

For a non-pro user the Google cloud storage is easier to use, in AWS-compatibility mode at least, so I think it's a good choice for backup storage in less mission-critical applications. I'm using it for one of my applications and I haven't had any issues yet.



I was finished, but I thought I'd just add a quick note on setting a bucket lifecycle. First a JSON config file has to be created with the lifecycle description, e.g. this is a file I call lifecycle_del_60d.json:

      "action": {"type": "Delete"},
      "condition": {"age": 60}

Then the gsutil command needs to be run to set the lifecycle on a bucket:

gsutil lifecycle set lifecycle_del_60d.json gs://my-bucket-name

…and that is that, job done. Files older than 60 days in the bucket will be automatically deleted.
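The setting can also be read back to confirm it took:

gsutil lifecycle get gs://my-bucket-name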


How to Make the WordPress Post Editor Default to Opening Links in a New Tab

This one definitely fits into the "Minor annoyances have a ridiculous amount of effort expended on them" category. See the pic for the minor annoyance; I had to click the "open link in a new tab" checkbox every time I added an external link in a post. One second wasted? Two tops on a slow day pre-coffee.

I should have just shrugged my shoulders and moved on, but that is not how things work in geek-land 😉

So, I went searching. I was a bit surprised to discover that there wasn't a way of setting this default; it seemed obvious. Maybe there was a plugin? Nope.

Maybe someone else had asked the question? Yes they had, and there was a solution … which was wrong, as it didn't work.

OK, I can see where this is going. I'm quite good at Javascript, so how hard can it be?

Turns out the solution is harder than you might think and it requires supplying a setup function for TinyMCE. I found a post on a related topic here which also doesn't work (it's for an older version of TinyMCE), but it gave me enough clues that I could put the rest of the pieces together myself after reading the V4 docs.

The code snippet below is the solution; I stuck it in a site-specific plugin I already had, but it could go in functions.php or wherever:

add_filter('tiny_mce_before_init', function($initArray){
    //add a setup fn
    $initArray['setup'] = <<<JS
function(editor) {
    //catch the ExecCommand event. The tinyMce mceInsertLink command then
    //triggers a custom WP_Link command
    editor.on('ExecCommand', function (e) {
      //console.debug('Command: ', e);
      if(e.command === 'WP_Link'){
          //release to UI so link value is populated
          setTimeout(function(){
            var linkSel = document.querySelector('.wp-link-input input');
            if(linkSel.value === ''){
                //no link so set the "Open link in a new tab" checkbox
                //to force the default as checked
                var linkTargetCheckSel = document.querySelector('#wp-link-target');
                linkTargetCheckSel.checked = true;
            }
          }, 0);
      }
    });
}
JS;
    $initArray['setup'] = trim($initArray['setup']); //prevent leading whitespace before fn
    return $initArray;
});

The trick is to catch the WP_Link command that the editor emits, look at the link value to see if it is blank, and if it is, set the "open link in new tab" checkbox so that checked becomes the default.

Works just fine and would make the lamest plugin ever 🙂

[Update Dec 17 – Bah! One of the WP updates since I created this solution has broken it. I'll research what happened and try and fix it again when I can be bothered.]

Upload to AWS S3 from a Bash Script

The Big River's cloud storage is very good and cheap too, so it is an ideal place to store backups of various sorts. I wanted to do this upload from a Bash script in as simple a way as possible.

I have previous with the API, having used it from PHP, both via the AWS SDK and also rolling my own simplified upload function. That wasn't exactly easy to do, so I didn't want to go there again.

A bit of Googling came up with this Github gist by Chris Parsons, which is almost exactly what I needed. I just had to parameterise it a bit more and add the facility to specify the storage class and AWS region.

Here is the finished result:

#S3 parameters
S3KEY="..."    #AWS access key id (placeholder)
S3SECRET="..." #AWS secret key (placeholder)
S3BUCKET="my-bucket-name"
AWSREGION="eu-west-1"
S3STORAGETYPE="STANDARD"

function putS3
{
  path=$1
  file=$2
  aws_path=$3
  bucket="${S3BUCKET}"
  date=$(date +"%a, %d %b %Y %T %z")
  acl="x-amz-acl:private"
  content_type="application/octet-stream"
  storage_type="x-amz-storage-class:${S3STORAGETYPE}"
  string="PUT\n\n$content_type\n$date\n$acl\n$storage_type\n/$bucket$aws_path$file"
  signature=$(echo -en "${string}" | openssl sha1 -hmac "${S3SECRET}" -binary | base64)
  curl -s -X PUT -T "$path/$file" \
    -H "Host: $bucket.s3-${AWSREGION}.amazonaws.com" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "$storage_type" \
    -H "$acl" \
    -H "Authorization: AWS ${S3KEY}:$signature" \
    "https://$bucket.s3-${AWSREGION}.amazonaws.com$aws_path$file"
}

This works a treat on the small-to-medium sized compressed files that I use it for.

The only real snag with this function is that it doesn't set a useful exit code when AWS rejects the upload, so it's not easy to make a script bail out if the upload fails for some reason. I got round it by grepping the output of the curl command and looking for the text "error". This finds all the cases I tried apart from an incorrect region.

The last line is changed to the following:

"https://$bucket.${AWSREGION}$aws_path$file" --stderr - | grep -i "error"
 #the --stderr bit onward is so that a return code is set if curl returns an 
 #error message from AWS
 #note that the value of the code is inverted from usual error interpretation: 
 #0=found "error", <>0=not found "error"

I wasn't entirely happy with this rather crude workaround, but it did achieve the desired result.


I later found and tested the --fail option for Curl, which sets the exit code properly. That means the grep hack isn't needed for error detection and normal testing of the $? Bash variable will work.
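With --fail in place the usual exit-status test works, e.g.:

#putS3 now propagates curl's exit code
if ! putS3 "$PWD" "blog-backup.tar.gz" "/"; then
    echo "S3 upload failed" >&2
    exit 1
fi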

(I also tidied up and made the function variables proper locals – see the Google cloud version for an example.)

Fun with Nginx: Getting WordPress to work in a Sub-Directory

Well, that was fun. After a marathon Google session and an almost infinite number of reloads of Nginx, I got WordPress with pretty permalinks working in an aliased sub-directory on my Windows test laptop.

(Clue – almost every post explaining how to do this is incorrect. An exercise for the interested reader is to find the one that worked …)

This is a simplified excerpt from a server{} directive. [The WordPress root is mapped from somewhere else into the /wordpress sub-directory using the alias directive.]

location @wp {
  rewrite ^/wordpress(.*) /wordpress/index.php?$1;
}

location /wordpress {
    alias "c:/webserver/www/websites/wpnm"; #where wp actually is
    try_files $uri $uri/ @wp;

    #other config stuff here....
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_read_timeout 300s;
        fastcgi_param  SCRIPT_FILENAME $request_filename;
        fastcgi_pass 127.0.0.1:9000; #wherever PHP-FPM / php-cgi is listening
        include fastcgi_params;
    }
}
The key bit is the try_files directive looking for a real physical file; when one isn't found, the @wp rewrite directive grabs the pretty permalink and stuffs it into a query string tacked onto index.php. This works with all the permalink variations that WP offers.

If WordPress is simply running in the root directory then this is easy-peasy and the rewrite isn't needed – simply stick /index.php?$args as the last term of try_files instead of @wp. That config can easily be found online.
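i.e. for a root install the location block collapses to something like:

location / {
    try_files $uri $uri/ /index.php?$args;
}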

The best resource I found for getting to grips with Nginx's config was a tutorial on Linode's website. This explains how the order of processing of the location directives works; essential knowledge for doing anything above a very basic level.

Creating these configs isn't that easy due to the limited debug tools. It's hard to see why things don't work, and the logs don't help.

The hilariously uninformative error messages from the fastcgi module are typical – almost any config mistake produces a blank page with "No input file specified.". Niiiice 🙂 Good luck finding out just what the incorrect parameter was 😉

The best debug suggestions I have found are in this blog post. I had a bit of fun since I started with $document_root$fastcgi_script_name as the SCRIPT_FILENAME, as the Nginx docs suggest, but that doesn't work when using aliases.

Of course, that little gem is not well documented…