Docker and MySQL: the ups and downs


Docker and MySQL will play happily together. But there are a few gotchas.

On a production system I manage, I run MySQL as a Docker container, on a private network shared with the other containers that need to access it. As a result, we benefit from network and container isolation. However, there are downsides.

The network isolation is great from a security perspective as MySQL isn’t exposed to the public at all.

Having it segregated in a Docker container brings all the benefits of container isolation: an exploit against another container would only give an attacker a connection to the MySQL container, not direct access to the data. Provided each container has its own credentials and database, an attack on one container would at worst allow access at that container’s privilege level.
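As a sketch of that setup (the user, password, and database names here are purely illustrative), each application container gets its own MySQL user, scoped to its own database:

```sql
-- Hypothetical per-container credentials: compromising the 'app1'
-- container only exposes app1_db, and only at these privilege levels.
CREATE USER 'app1'@'%' IDENTIFIED BY 'use-a-long-random-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON app1_db.* TO 'app1'@'%';
FLUSH PRIVILEGES;
```

Avoiding GRANT ALL here is what limits the blast radius of a compromised container.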

Separating the MySQL data out into a volume lets you upgrade the version of MySQL fairly easily: you can simply re-create the container against the same volume (and perhaps run mysql_upgrade afterwards).

However, Docker out of the box can completely destroy your database. The problem arises from how Docker kills a container. By default, Docker sends a SIGTERM and waits 10 seconds. If the process has not exited within 10 seconds, Docker effectively runs kill -9 (SIGKILL) on the main process inside the container. I honestly fail to see why this timeout is so low by default: 10 seconds isn’t enough time to stop even a small database.

Anyone familiar with MySQL can see that this is a problem. The SIGTERM sent by Docker causes MySQL to start cleaning up and flushing to disk anything that was in memory. On a large database, this can take a few minutes. Interrupting this is BAD, and it is exactly what Docker does 10 seconds later! If you’re lucky, and probably most of the time, MySQL will recover.

This is a very large downside which isn’t exactly advertised. Oddly, the official MySQL and MariaDB images do not mention on their Docker Hub or GitHub pages that this should be a consideration. I understand MySQL has great crash recovery, but I have personally been hit by this problem and can say it isn’t perfect.

This behavior will not only occur whenever you reboot the machine or stop the container; it will also happen without warning during an upgrade of Docker (at least on CentOS 7).

Docker provides two ways of mitigating this. When creating the container with docker run, you can pass --stop-timeout=6000, which gives the container 6000 seconds (100 minutes) to shut down gracefully. I recommend setting this very high, as MySQL can be unpredictable in how long it takes to stop.

If you’ve already created the container, you can use docker stop -t 6000 to stop it with the same 100-minute grace period. However, I highly recommend rebuilding the container with an appropriate stop timeout.
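If you manage the container with Docker Compose instead, the equivalent knob is stop_grace_period. The service definition below is only a sketch (the image tag, volume, and network names are placeholders, not my production setup):

```yaml
services:
  mysql:
    image: mysql:5.7
    stop_grace_period: 6000s        # wait up to 100 minutes before SIGKILL
    volumes:
      - mysql-data:/var/lib/mysql   # data survives container re-creation
    networks:
      - backend                     # private network shared with app containers

volumes:
  mysql-data:

networks:
  backend:
    internal: true                  # never exposed to the public internet
```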

So, as much as Docker provides advantages, there is a huge undocumented behavior that can cause significant damage to a database.

The depressing side of the US from a UK visitor


Myself and my boyfriend recently went to San Francisco. He had a conference there for work so we booked a few days before and after to spend on a holiday, and we had a great time (and the dates worked out cheaper)!

There’s a lot that comes off as unfortunate and “non-Western” about the US. Americans constantly make the argument of freedom, but how free are they? I’d like to make the proposition that British people are a lot more free.

The fact that getting into an accident in the US could write off your financial future is so, so depressing. It pains me to think this issue hasn’t been resolved. Obamacare was a chance to solve it, but of course, the Democrats did nothing to get rid of the actual problem of expensive healthcare and instead tried to force and subsidise the insurance companies into doing something. This proved to be a failed approach, and Obamacare premiums are rising by triple-digit percentages for Americans next year. Not surprising is the fact that these insurance companies have donated so much money to the Democrats and the Clinton Foundation. Never have I valued the NHS more. The NHS is not perfect and has numerous issues, but I’d say its emergency care is unparalleled. At least getting into a spot of trouble in Britain will not put your future on a crash course.

The tipping culture is also a little bit hard to digest. In complete contrast, when I visited Japan, the advice and recommendation was not to tip at all. The Japanese simply don’t do it. They believe that if they needed to charge more for a service, they would just charge more. I completely agree. If your restaurant prices are so low that you can’t pay staff, then you should increase the prices. That is common sense. People should not have to worry about supplementing a wage because restaurants are not passing along a correct wage to their workers.
As a middle ground, I prefer the UK system: you tip if you receive exceptional service, but it is not expected.

To add to what I said about healthcare, the US ads for medicine are just crazy and bizarre. Almost everywhere else in the world, it is expected that your doctor will find the medication you are supposed to be on; if you have done your own investigation, I suppose it wouldn’t be so bad to give your doctor recommendations on medication.
This is in stark contrast to the US, where the pharmaceutical companies advertise directly to the consumer. I think almost everyone would agree this is wrong, especially when you hear how severe the side effects of some of these medications are (which the ads themselves explain, so you’d think that would be enough to put people off).

Very very bizarre. As lovely and beautiful as the US is, I cannot ever see myself living there. The culture is just too different.

Money saving National Rail oddities


I’ve encountered some money-saving oddities on National Rail, and I thought I’d list them here. They can be extremely handy if you travel across the country a lot.

  • London to Edinburgh is sometimes cheaper via Liverpool.
    The reason is that you can completely avoid Virgin Trains via this route, and instead use the following train companies:
    book with London Midland – London Euston to Liverpool Lime St (1 change at Stafford or Crewe)
    book with First Transpennine Express – Liverpool Lime St to Edinburgh (1 change at Preston, via Northern Rail)
    A hell of a journey, but you can get this down to as cheap as £20.
    Basically, extending your journey to avoid expensive train companies and going with cheaper, local companies can save you a lot of money.
  • There is a hidden first class advance ~£55 Virgin Trains West Coast ticket from London Euston to Edinburgh, usually daily.
    There’s a fairly hidden ticket here, and it’s usually cheaper than standard class. It is completely hidden from the Virgin Trains East Coast site, TheTrainLine and sometimes Virgin Trains West Coast’s own site.
    To find this ticket, use the ScotRail booking site and search for tickets from London Euston to Edinburgh at around 10:00am.
    Some companies have hidden or difficult-to-find tickets that you won’t find on their own websites. I can’t explain why, but I think it’s because they are required to sell the ticket, yet try to hide it on their own booking sites. Perhaps they are allowed to do this (?).
  • There is no proper verification for a 16-25 railcard. You can generate a random passport number to get past the age verification to apply for a 16-25 railcard. I have no need for this, as I’m young enough, but there are sites that can generate a passport number that is convincing enough (and I doubt any country allows National Rail to actually verify/check them).
    This is probably against the terms and conditions, and possibly fraudulent.
  • (Valid until the end of May 2016) Virgin Trains East Coast Plane Relief. If you make a convincing enough entry before the end of May, you can get a £15 standard class one-way or £30 first class one-way ticket from Edinburgh to London. Follow the instructions here.

All of these oddities were valid as of April 2016. Leave a comment if one of them helps you!

What does Facebook have against Christmas?


This turned into a bit of a rant! My apologies!

If you were on Facebook during Christmas time, you probably noticed something was lacking from the ‘Trending’ part of the site, which is usually on the right-hand side. Christmas.
I don’t think this can be attributed to a lack of user activity, because at least in my circle of friends, and my partner’s, there were many Christmas messages and photos being passed about.
And Christmas is still trending on Twitter, as of Boxing Day.

So, what does Facebook have against Christmas? Is this exclusive to the company, or is it a bigger problem in Silicon Valley and an attack on Christianity?
Why was Facebook happy to add Eid and Ramadan to that list, but not Christmas?

I’ll start by expressing my opinion that there is a global attack on Christianity. I’m not even Christian myself, but my kindest friends in life have been Christians. The majority of modern Christians I’ve met have a ‘live and let live’ attitude to life, and it’s hard to compare other world religions to it. The closest parallels I can probably draw are to Buddhism. Both religions share the same level of passiveness and kindness, with very little doctrine-based hate crime or attacks coming from either.

Of course though, to the Buzzfeed and Tumblrite public, Christianity is seen as an oppressive and evil religion. Why? Because they see Christians as running all the current existing power structures.
Well, is that even true? Not really. But you wouldn’t get elected as president of the U.S. or a senate-grade position without saying you are.

When a Christian does deviate from what is politically correct, the media is quick to jump on them and call them every name under the sun. When we see similar behavior from “less-quiet” religions like Islam, they are portrayed as the victim, or the media simply ignores it if there is no way to make them appear victimized.

Every year, we see Christian holidays consistently eroded.
Well, we shouldn’t celebrate Christmas, should we, that’s a bit racist? What about religious group X? Well, it’s just consumerism, isn’t it?
To the point, of course, that it’s becoming politically incorrect to say Merry Christmas, and instead we say Season’s Greetings or something similar.
The leader of the Labour Party, Jeremy Corbyn, didn’t even issue a Christmas message or broadcast this year; he didn’t say squat about it. But he issued such a message for Eid.

So really, Facebook censoring Christmas from their trending topics entirely is just the next progressive step.
Will Google do the same next year?

Two-factor SSH authentication with U2F hardware security key


I’m shocked at the lack of documentation on using a U2F security key for two-factor SSH authentication. Luckily, it isn’t too difficult, thanks to the fact that Yubico has done a lot of the hard work for us.

A prerequisite for this guide is compiling and installing the pam_u2f module. Luckily, there is a fair amount of documentation online on how to do this. You’ll also need a system that can run the u2f-host command. Instructions for getting set up with these are here and here, respectively. You will need both of these on both the server and client.

If you are working with a remote system that you do not have physical access to, I recommend keeping a backup SSH session open at all times, as mistakes in the sshd or PAM configuration can lock you out. You have been warned.

First, come up with a unique name for the system you wish to SSH in to. This doesn’t have to be wholly unique, just make sure it doesn’t collide with anything else on the key, really.
For this example, we’ll use backupserver.
You’ll also need to know which account you wish to secure. For this example, we’ll use root.
For this tutorial, I used an OS X system to generate the key and responses, and secured a CentOS 7 system. The instructions should be easily adaptable to other operating systems.

Registering the key
We now need to register a new key with your U2F device. In order to do this, we’ll use pamu2fcfg.

On the system with the U2F device connected, run the following command:
pamu2fcfg -o pam://backupserver -i pam://backupserver -u root
Make sure to change ‘backupserver’ and ‘root’ to what you decided earlier.
Your U2F device should then start flashing. Touch it to continue.
After you’ve touched the device, pamu2fcfg will print the U2F key definition on the command line.
The first part is the username the key is for, and the second part contains both the key handle and the user key, separated by a comma.

Configuring the U2F definition
Now log in to the server you want to protect, and execute the following command, making sure to replace the data with the output you got from the above command (in quotes). Example:
echo 'root:cCaQUJyqmU68z6KVAdpyk2871wuRVtgDbXCKq5A8zqBj577vdUhdLKeOQG4l9yG-t7Ze8wgtsdF7l0GEjO0g-A,04f32cb3dd0b0b1ac20517050ab1dd2700f410f638675c7ebe18b9607459171c8b82de6d1a1d5d25429157db43943033b741c0c376c375ce460628bd7464677fb4' > /etc/u2f_mappings

This will create the file the PAM module will use to find your key information. (Note that a single > overwrites any existing mappings; use >> to append entries for additional users.)

Configuring SSH
Next, let’s prepare the sshd configuration. We need to turn on ChallengeResponseAuthentication and make sure sshd is set to use PAM.
Open up /etc/ssh/sshd_config using your favourite editor.
Make sure to set the following:

ChallengeResponseAuthentication yes

It is probably already defined, so search for it and set it to yes, or uncomment it.
Also make sure UsePAM is set to yes:

UsePAM yes

Now reload SSHd:

systemctl reload sshd

Configuring PAM
Next we need to modify the PAM SSH configuration. Open up /etc/pam.d/sshd. It should look a bit like this:

auth       required     pam_sepermit.so
auth       substack     password-auth
auth       include      postlogin
account    required     pam_nologin.so
account    include      password-auth
password   include      password-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open env_params
session    optional     pam_keyinit.so force revoke
session    include      password-auth
session    include      postlogin


Find the line:

auth substack password-auth

Add the following directly after it, replacing backupserver with the name you chose above:

auth required pam_u2f.so origin=pam://backupserver appid=pam://backupserver authfile=/etc/u2f_mappings manual

This configuration will ensure that the security key challenge-response is done after the password authentication.

Testing and Logging In
Now, it’s time to test this configuration. Log in to the server in a new session to test if it works.
After you supply your password, you should be met with the following prompt.

Now please copy-paste the below challenge(s) to 'u2f-host -aauthenticate -o pam://backupserver'
{ "keyHandle": "Rr3ZPpbO6fuCF1_w1RgxY2Ft6FpZ3CtABHgeySshQpH470dbVlZCbFIjUEzut3JxQrCoUWJfo4YKjAxJxL2MwA", "version": "U2F_V2", "challenge": "zwkjaDcNdfjMSqOKIq84ZsF9MlS8M0P3rmHCB7PSuKo", "appId": "pam:\/\/backupserver" }
Now, please enter the response(s) below, one per line.

On your system, run the following — replacing the u2f-host command and challenge with what you are asked for:

echo '{ "keyHandle": "Rr3ZPpbO6fuCF1_w1RgxY2Ft6FpZ3CtABHgeySshQpH470dbVlZCbFIjUEzut3JxQrCoUWJfo4YKjAxJxL2MwA", "version": "U2F_V2", "challenge": "zwkjaDcNdfjMSqOKIq84ZsF9MlS8M0P3rmHCB7PSuKo", "appId": "pam:\/\/backupserver" }' | u2f-host -aauthenticate -o pam://backupserver

You should get a response like this:

{ "signatureData": "AQAAADwwRQIhANLPROgaGa5NYrTBY8qK8bCuy3Vc_LI6wZdrFhquyt0lAiA4ppzdGv277g853EW1TKJMC788TwxWV4SOPUPltDGyYA==", "clientData": "eyAiY2hhbGxlbmdlIjogIm1DTDBxel9aV1NvR1pHcElSZk1GYm5WSkcyNHBGMG1tTjR4Z2s4Z3VIU3MiLCAib3JpZ2luIjogInBhbTpcL1wvYmFja3Vwc2VydmVyIiwgInR5cCI6ICJuYXZpZ2F0b3IuaWQuZ2V0QXNzZXJ0aW9uIiB9", "keyHandle": "Rr3ZPpbO6fuCF1_w1RgxY2Ft6FpZ3CtABHgeySshQpH470dbVlZCbFIjUEzut3JxQrCoUWJfo4YKjAxJxL2MwA" }

Paste that into your SSH session, and press Enter. Provided you supplied a correct password and challenge response, you should be logged in.

Getting AMD switchable graphics working on Hackintosh OS X


So this needs a little bit of backstory —
A few months ago, my laptop died, so I decided to borrow my boyfriend’s old MacBook Pro for a few weeks.
I loved it. I didn’t exactly love the MacBook Pro, but I absolutely loved OS X. The developer tools for my kind of work (web development) are frankly 10 years ahead of their Windows counterparts, and it looks far more visually appealing than Windows or even Linux.

So, when I fixed my laptop, I couldn’t settle for Windows. But I didn’t want to go out and buy a MacBook Pro, as the hardware I already have is future-proof for the next 3 years.

I investigated the Hackintosh community, and to my amazement almost all of my hardware was supported out of the box and well documented. But absolutely everyone said that AMD switchable graphics do not work and that there is no way to get them working.
For me to use a system full-time, it has to have full hardware support. This is the main reason I don’t use Linux: I always have to settle for “oh, it works, but not correctly”.

I got a copy of the AMD card’s VBIOS by using the following tool to dump the system BIOS. From reading around, I knew the VBIOS is around ~63K in size, so I found it through trial and error.
I then gave this to Clover (the Hackintosh UEFI boot utility).

This seemed to work. Although I could tell the graphics card was actually initialized by OS X, I couldn’t use it at all. This made me think there must be a way of tricking Apple’s mux control into activating it.
Some users had reported success with tricking the system into using it through EDID forcing, but this cannot work on my hardware, as the display goes through the Intel card and there is no way to switch this.

Getting curious, I looked around for a tool that would show more information about the graphical environment.
I found that the card was there, and was available! I was even able to select it and render through it. The FPS was significantly higher than with the Intel card, which shows conclusively that some rendering is going on at the dGPU level.

This made me think of the following problems:

  • This card will only EVER work with OpenGL applications that allow offline renderers, as this system has no mux that allows the display to be switched entirely.
  • I will need some way of forcing EVERY OpenGL application to use the AMD renderer. This will probably require a low-level library hook.

Problem 1 cannot be solved easily, as far as I’m aware, and I’m not willing to waste any time on it.

Problem 2 however, was slightly easier and could be accomplished.
Digging around the Apple OpenGL documentation, I found a function that does what I need: CGLSetVirtualScreen().
The problem is calling this at the right time, and inside each targeted application.
Reading further into the Apple developer documentation, I found that every OpenGL application has to call CGLCreateContext() to get an OpenGL context. The sensible approach is to hook that function: call the original, set the virtual screen to what we want, then return to the original application.

And… it worked!
I haven’t been able to get anything but Chrome and Chess working with this (mostly because I haven’t tried anything else), and I can’t be bothered writing some system-level tool to do this for every application; someone else can, if they can be bothered.

The full source code is here in zip form, and includes a build script and an example to run with Chrome. You’ll probably have to modify both the launcher script and the source for your system.
If you cannot do this yourself, then contact me and I can do it for a price.

We’ve spent years fighting the gender apartheid, now the Left takes us back


I feel like feminism has honestly achieved a lot in erasing gender segregation in the West.

Well, kind of. When I say feminism, I mean the true movement for gender equality. I don’t for a second mean the whiny Tumblrites who complain about air conditioning.

Feminism has been very successful in promoting STEM fields to girls. As a result, STEM fields each year become closer and closer to a true 50/50 gender balance.
It’s also been very successful in rebalancing the gender earnings gap, with young women now being paid more than their male counterparts.

We’re starting to see gender neutral toilets introduced a lot more, because who would have thought — when people go to the toilet, usually they just want to do their business and leave.

We have mixed-gender schooling, although it is constantly under attack by the feral far-left. We already know that mixed-sex schooling works better here.

The latest attack on equality comes from Jeremy Corbyn’s Labour Party (let’s be honest, we all know how the vote is going to turn out): an attack on mixed-gender train carriages. And, to a point, I can actually understand it.
If you’ve ever been on the London Underground (LU) or any busy train during rush hour, you know how personal it can get.
Sometimes people exploit the closeness. Sometimes this exploitation is full on sexual assault.
Since we know that most of the aggressors are male, a quick solution, of course, would be gender-segregated carriages.
Some religions also have rules against contact with the other sex; however, I don’t see this as any reason whatsoever to defend the idea.

Gender-segregated carriages, however, wouldn’t address the fact that some perpetrators are female. And that some victims are male.
Do we then have male-only carriages, to protect men from women? Or how about our own personal carriages, because I dictate that I cannot be around anyone?
And for trans folk, do we ask for ID before they can enter these gender-segregated carriages? How do we deal with intersex people? Will it lead to situations like this?
How do we know how many women-only carriages to put on? Is it 50% of all the train space? Do we actually expect people to follow these rules on over-capacity services like the LU?

I can see us getting more and more segregated as we import people with third-world views and ideologies into Britain en masse; those on the extreme left want to make Britain as comfortable as they can for migrants, and the best way of doing so is to replicate their oppressive culture.
A good template state for Corbyn might be Sweden, the far left’s literal rape capital of Europe.

We have to defend what people have fought so hard to build. We can’t just let years of progress wash down the drain, in search of quick solutions.

Conditional resource caching in nginx


I ran into a problem with one of our sites today: we got promoted by a very popular YouTuber.
Google Analytics was recording around ~900 people active on our site in real time.

Although we are prepared for some degree of traffic deviation, this was way above what we were prepared for.

After some tuning of the TCP/IP stack, activating certain performance-mode features of our board software, and making some on-the-fly adjustments to our MySQL server, we got things running again, albeit with massive lag.

One of the major problems was that our board software, in its infinite wisdom, decides to serve some screenshots via PHP instead of straight from the filesystem.

REDACTED - - [21/Mar/2015:16:22:41 -0400] "GET /index.php?app=downloads&module=display&section=screenshot&id=7324 HTTP/1.0" 200 2705 "" "Mozilla/5.0 (Linux; Android 4.1.2; REDACTED"
REDACTED - - [21/Mar/2015:16:22:41 -0400] "GET /index.php?app=downloads&module=display&section=screenshot&id=7324 HTTP/1.0" 200 2705 "" "Mozilla/5.0 (Windows NT 6.1; REDACTED"
REDACTED - - [21/Mar/2015:16:22:41 -0400] "GET /index.php?app=downloads&module=display&section=screenshot&id=7320 HTTP/1.0" 200 3250 "" "Mozilla/5.0 (Linux; Android 4.1.2; REDACTED"
REDACTED - - [21/Mar/2015:16:22:41 -0400] "GET /index.php?app=downloads&module=display&section=screenshot&id=7330 HTTP/1.0" 200 4034 "" "Mozilla/5.0 (Windows NT 6.2; REDACTED"
REDACTED - - [21/Mar/2015:16:22:41 -0400] "GET /index.php?app=downloads&module=display&section=screenshot&id=7319 HTTP/1.0" 200 14571 "" "Mozilla/5.0 (Linux; REDACTED"
REDACTED - - [21/Mar/2015:16:22:41 -0400] "GET /index.php?app=downloads&module=display&section=screenshot&id=7329 HTTP/1.0" 200 26334 "" "Mozilla/5.0 (Windows NT 6.2; REDACTED"
REDACTED - - [21/Mar/2015:16:22:41 -0400] "GET /index.php?app=downloads&module=display&section=screenshot&id=7322 HTTP/1.0" 200 25411 "" "Mozilla/5.0 (Linux; REDACTED"

This is one part of our site we hadn’t actually cached before, and it was now becoming a pressing problem and the root cause (!) of everything else being slow. Not only is each call expensive because it ties up a PHP worker, it’s also apparently quite expensive for the database.

Unfortunately, the location block in nginx cannot match on GET/POST variables. The approach of using an if block to dynamically set proxy_cache cannot be used either, as nginx disallows this.

Instead, I managed to get the following workaround working, see the comments:

location / {
    set $nocache 1; # Make sure, by default, we don't cache any response.
    if ($args ~ section=screenshot) { # Is this a screenshot?
        set $nocache 0; # Allow caching later.
    }
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_redirect off;
    proxy_buffering on;
    proxy_set_header        Host            $host;
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        Accept          "";
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;
    proxy_cache STATIC;
    proxy_cache_key "$scheme$request_method$host$request_uri";
    proxy_cache_valid 200 6h;
    proxy_cache_bypass $nocache; # If $nocache is 1, skip the cache...
    proxy_no_cache $nocache; # ...and don't store the response.
    proxy_pass  http://backend;
    include g17upstream-location-common.conf;
}

This solution means nginx will cache the response whenever it sees the $nocache variable set to 0. Since the default state for $nocache is 1, responses are not cached unless explicitly allowed. Extending this solution is as simple as adding more if statements that set $nocache to 0.
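As an aside, nginx’s if directive has well-known pitfalls, and if you only need to key off the query string, a map block (declared at the http level, outside the location block) can express the same logic declaratively. A sketch, using the same names as above:

```nginx
# $nocache defaults to 1 (bypass the cache) and drops to 0
# only when the query string contains section=screenshot.
map $args $nocache {
    default               1;
    ~section=screenshot   0;
}
```

The proxy_cache_bypass and proxy_no_cache lines in the location block stay exactly the same; only the decision moves out of the if.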

I hope this helps someone out, as this approach (although slightly ugly) is the best I can think of and I haven’t seen anyone else documenting this.

An update: TetherUnblocked


Long time no post.

I’m still working on TetherUnblocked, although I’m changing the name and the general structure of the application to an app that will completely transform any Android mobile device into a router.

This means the new app will include:

  • Port Forwarding
  • Web Interface
  • Improved Logging

Expect an updated version of the app (with the licensing stuff removed) soon. I might actually release it on Google Play.

P.S. Three have updated their blocking (in the past few days) to also initiate a block whenever a request to “” is made.
Although my app doesn’t fix this (yet), you can fix it yourself by blocking that domain in your hosts file or on a custom local DNS server.

The problem with CloudFlare Free SSL


CloudFlare recently introduced a ‘Free SSL’ service. Straight off the bat, this sounds great for website owners. It’s basically the service they’ve been offering to their Pro users for a while: a chance for organisations and websites to use SSL without knowing the slightest thing about security.

It is fundamentally flawed, and it shows the problem with the centralized authority system we have right now. We’ve never before had a system where it is this easy to serve HTTPS sites. Yes, we’ve had “free” certificate authorities like StartCom, but they are known for doing a lot of manual verification and validating WHOIS details. CloudFlare Free SSL is the final bullet in this ridiculous system.

Phishing. Fraudsters and phishers will love the new service. It means they can set up a fraudulent website very quickly, with no verification beyond being able to change the DNS records of the domain, instantly getting that padlock we’ve been telling people to look for for years.

False sense of security. One of the major reasons more and more organisations and website owners are flocking to SSL is that it protects against interception. Flexible SSL is a CloudFlare option which adds encryption between the user and CloudFlare, but not between CloudFlare and the origin server. Anyone on any hop between CloudFlare and the origin can listen in, and you can bet that probably includes your buddies at the NSA/GCHQ.
The very annoying part of this is you’ve got absolutely no idea if a website you are connecting to is using this Flexible SSL, so you’ve got absolutely no way of trusting that padlock anymore.

Hacking. If someone discovers your CloudFlare username and password, they can change the origin server to somewhere else. They could change the origin server to a reverse proxy server that logs everything and then passes it on to the real server. You as a user would see absolutely nothing. The site owner might not even figure it out, as it looks like everything is fine and well. Without CloudFlare, an attacker would at the very least have to get a new certificate issued, or hack the server and steal the private key. Now, they don’t have to.

What could CloudFlare do? There are a few things CloudFlare could do to make their Free SSL service not suck as much.
I would recommend getting rid of Flexible SSL, or at least add a warning to the user that your traffic could still be intercepted.
They could do more manual verification of new accounts, and domains that look suspicious should not be issued a certificate.
I’ll admit, there’s not much they can do about the hacking aspect, although I’d personally require users to use two-factor authentication to activate their SSL options.

It should be noted I use SSL as it’s the industry standard term. Nowadays, it mostly refers to newer TLS technology.