Rob Allen: Dependency Injection with OpenWhisk PHP (20.6.2018, 10:02 UTC)

Any non-trivial PHP application uses various components to do its work, from PDO through to classes from Packagist. It's fairly common in a standard PHP application to use Dependency Injection to configure and load these classes when necessary. How do we do this in a serverless environment such as OpenWhisk?

This question comes up because we do not have a single entry point into our application; instead, we have one entry point per action. If we're using Serverless to write an API, then we probably have a set of actions for reading, creating, updating and deleting resources, all within the same project.

Consider a project that uses PDO to communicate with the database. The PDO object will need to be instantiated with a DSN string containing the host name, database, credentials, etc. It's likely that these will be stored in the parameters array that's passed to the action and set up either as package parameters or service bindings if you're using IBM Cloud Functions.

For this example, I have an ElephantSQL database set up in IBM's OpenWhisk service, and I used Lorna Mitchell's rather helpful Bind Services to OpenWhisk Packages article to make the credentials available to my OpenWhisk actions.

Using a PDO instance within an action

Consider this action, which returns a list of todo items from the database. Firstly we instantiate and configure the PDO instance, and then create a mapper object that can fetch the todo items:

function main(array $args) : array
{
    if (!isset($args['__bx_creds']['elephantsql']['uri'])) {
        throw new Exception("ElephantSQL instance has not been bound");
    }
    $credentials = parse_url($args['__bx_creds']['elephantsql']['uri']);

    $host = $credentials['host'];
    $port = $credentials['port'];
    $dbName = trim($credentials['path'], '/');
    $user = $credentials['user'];
    $password = $credentials['pass'];

    $dsn = "pgsql:host=$host;port=$port;dbname=$dbName;user=$user;password=$password";

    $pdo = new PDO($dsn);
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // now we can use $pdo to interact with our PostgreSQL database via a mapper
    $mapper = new TodoMapper($pdo);
    $todos = $mapper->fetchAll();

    return [
        'statusCode' => 200,
        'body' => $todos,
    ];
}

That's quite a lot of set-up code that clearly doesn't belong here, especially as we need to do the same thing in every action in the project that connects to the database. We are also probably going to put our database access code in a mapper class that takes the PDO instance as a dependency, so to my mind it makes sense to use a DI container in our project.

I chose to use the Pimple DI container, because it's nice, simple and fast.

To use it, I extended Pimple\Container and added my factories in the constructor:

<?php
namespace App;

use InvalidArgumentException;
use PDO;
use Pimple\Container;

class AppContainer extends Container
{
    public function __construct(array $args)
    {
        if (!isset($args['__bx_creds']['elephantsql']['uri'])) {
            throw new InvalidArgumentException("ElephantSQL instance has not been bound");
        }
        $credentials = parse_url($args['__bx_creds']['elephantsql']['uri']);

        $configuration = [];

        /**
         * Factory to create a PDO instance
         */
        $configuration[PDO::class] = function (Container $c) use ($credentials) {
            $host = $credentials['host'];
            $port = $credentials['port'];
            $dbName = trim($credentials['path'], '/');
            $user = $credentials['user'];
            $password = $credentials['pass'];

            $dsn = "pgsql:host=$host;port=$port;dbname=$dbName;user=$user;password=$password";

            $pdo = new PDO($dsn);
            $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            return $pdo;
        };

        /**
         * Factory to create a TodoMapper instance
         */
        $configuration[TodoMapper::class] = function (Container $c) : TodoMapper {
            return new TodoMapper($c[PDO::class]);
        };

        parent::__construct($configuration);
    }

}

In this code, we create two factories: one to create the PDO instance and one to create a mapper class called TodoMapper. The TodoMapper class composes a PDO instance via its constructor, so in the factory for the TodoMapper, we retrieve the PDO instance from the container.

The nice thing about doing it this way is that if an action doesn't use a TodoMapper, then the connection to the database is never made, as Pimple only invokes a factory the first time that service is retrieved from the container.
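To make that concrete, here's a minimal sketch (mine, not the truncated remainder of the article) of what an action's main() could look like once AppContainer is in place. It assumes that AppContainer and TodoMapper live in the App namespace and are autoloaded via Composer:

<?php
// Sketch of an action that delegates all wiring to the container above.
use App\AppContainer;
use App\TodoMapper;

function main(array $args) : array
{
    // The container validates the ElephantSQL binding and holds the factories
    $container = new AppContainer($args);

    // The PDO factory only runs if and when TodoMapper is requested,
    // so actions that never touch the database never open a connection
    $mapper = $container[TodoMapper::class];
    $todos = $mapper->fetchAll();

    return [
        'statusCode' => 200,
        'body' => $todos,
    ];
}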

Truncated by Planet PHP, read more at the original (another 906 bytes)

Link
SitePoint PHP: Building an Image Gallery Blog with Symfony Flex: Data Testing (19.6.2018, 18:00 UTC)

In the previous article, we demonstrated how to set up a Symfony project from scratch with Flex, and how to create a simple set of fixtures and get the project up and running.

The next step on our journey is to populate the database with a somewhat realistic amount of data to test application performance.

Note: if you did the “Getting started with the app” step in the previous post, you've already followed the steps outlined in this post. If that's the case, use this post as an explainer on how it was done.

As a bonus, we'll demonstrate how to set up a simple PHPUnit test suite with basic smoke tests.

More Fake Data

Once your entities are polished, and you've had your "That's it! I'm done!" moment, it's a perfect time to create a more significant dataset that can be used for further testing and preparing the app for production.

Simple fixtures like the ones we created in the previous article are great for the development phase, where loading ~30 entities is done quickly, and it can often be repeated while changing the DB schema.

Testing app performance, simulating real-world traffic and detecting bottlenecks requires bigger datasets (i.e. a larger amount of database entries and image files for this project). Generating thousands of entries takes some time (and computer resources), so we want to do it only once.

We could try increasing the COUNT constant in our fixture classes to see what happens:

// src/DataFixtures/ORM/LoadUsersData.php
class LoadUsersData extends AbstractFixture implements ContainerAwareInterface, OrderedFixtureInterface
{
    const COUNT = 500;
    ...
}

// src/DataFixtures/ORM/LoadGalleriesData.php
class LoadGalleriesData extends AbstractFixture implements ContainerAwareInterface, OrderedFixtureInterface
{
    const COUNT = 1000;
    ...
}

Now, if we run bin/refreshDb.sh, after some time we'll probably get a not-so-nice message like PHP Fatal error: Allowed memory size of N bytes exhausted.

Apart from slow execution, every error would result in an empty database because EntityManager is flushed only at the very end of the fixture class. Additionally, Faker is downloading a random image for every gallery entry. For 1,000 galleries with 5 to 10 images per gallery that would be 5,000 - 10,000 downloads, which is really slow.

There are excellent resources on optimizing Doctrine and Symfony for batch processing, and we're going to use some of these tips to optimize fixtures loading.

First, we'll define a batch size of 100 galleries. After every batch, we'll flush and clear the EntityManager (i.e., detach persisted entities) and tell the garbage collector to do its job.

To track progress, let's print out some meta information (batch identifier and memory usage).

Note: After calling $manager->clear(), all persisted entities are now unmanaged. The entity manager doesn't know about them anymore, and you'll probably get an "entity-not-persisted" error.

The key is to merge the entity back into the manager: $entity = $manager->merge($entity);
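As a rough sketch of that idea (my own illustration, not the article's actual fixture code, which follows in the original), the batched part of the loading loop could look something like this, assuming a Doctrine $manager and a hypothetical createGallery() helper that builds a single Gallery entity:

$batchSize = 100;

for ($i = 1; $i <= self::COUNT; $i++) {
    $gallery = $this->createGallery($i); // hypothetical helper
    $manager->persist($gallery);

    if ($i % $batchSize === 0) {
        $manager->flush();
        $manager->clear();   // detach all managed entities
        gc_collect_cycles(); // reclaim memory between batches

        // print the batch identifier and memory usage to track progress
        printf(
            "%d Memory usage (currently) %dMB / (max) %dMB\n",
            $i,
            memory_get_usage(true) / 1024 / 1024,
            memory_get_peak_usage(true) / 1024 / 1024
        );
    }
}

$manager->flush(); // persist whatever remains in the final, partial batch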

Without the optimization, memory usage keeps increasing while the LoadGalleriesData fixture class runs:

> loading [200] App\DataFixtures\ORM\LoadGalleriesData
100 Memory usage (currently) 24MB / (max) 24MB
200 Memory usage (currently) 26MB / (max) 26MB
300 Memory usage (currently) 28MB / (max) 28MB
400 Memory usage (currently) 30MB / (max) 30MB
500 Memory usage (currently) 32MB / (max) 32MB
600 Memory usage (currently) 34MB / (max) 34MB
700 Memory usage (currently) 36MB / (max) 36MB
800 Memory usage (currently) 38MB / (max) 38MB
900 Memory usage (currently) 40MB / (max) 40MB
1000 Memory usage (currently) 42MB / (max) 42MB

Memory usage starts at 24 MB and increases by 2 MB for every batch (100 galleries). If we tried to load 100,000 galleries, we'd need 24 MB + 999 (999 batches of 100 galleries, 99,900 galleries) * 2 MB = ~2 GB of memory.

After adding $manager->flush() and gc_collect_cycles() for every batch, removing SQL logging with $manager->getConnection()->get

Truncated by Planet PHP, read more at the original (another 5400 bytes)

Link
Evert Pot: Ketting 2.0 release (19.6.2018, 16:00 UTC)

Being fresh out of a job, I had some time to work on a new release of the Ketting library.

The Ketting library is meant to be a generic HATEOAS library for Javascript using a simple, modern API. Currently it only supports the HAL and HTML formats, but I’m curious which other formats folks are interested in seeing support for.

An example:

const ketting = new Ketting('https://example.org/bookmark');
const author = await ketting.follow('author');
console.log(await author.get());

For the 2.0 release, the biggest change I’ve made is that everything is now converted to TypeScript. TypeScript is so great, I can’t really imagine writing any serious javascript without it anymore.

Most of the sources are also upgraded to use modern javascript features, such as async/await, for...of loops and const/let instead of var.

A small bonus feature is the addition of the .patch() method on resources, which provides a pretty rudimentary shortcut for doing a PATCH request. I kept it extremely basic, because I wanted to figure out how users like to use this feature first, before over-engineering it.

Interested? Go check out the project and documentation on Github, or go download it off npmjs.com.

Link
Matthias Noback: Doctrine ORM and DDD aggregates (19.6.2018, 07:00 UTC)

I'd like to start this article with a quote from Ross Tuck's article "Persisting Value Objects in Doctrine". He describes different ways of persisting value objects when using Doctrine ORM. At the end of the page he gives us the following option - the "nuclear" one:

[...] Doctrine is great for the vast majority of applications but if you’ve got edge cases that are making your entity code messy, don’t be afraid to toss Doctrine out. Setup an interface for your repositories and create an alternate implementation where you do the querying or mapping by hand. It might be a PITA but it might also be less frustration in the long run.

As I discovered recently, you don't need an edge case to drop Doctrine ORM altogether. But since there are lots of projects using Doctrine ORM, with developers working on them who would like to apply DDD patterns to it, I realized there is probably an audience for a few practical suggestions on storing aggregates (entities and value objects) with Doctrine ORM.

Designing without the ORM in mind

When you (re)learn how to design domain objects using Domain-Driven Design patterns, you first need to get rid of the idea that the objects you're designing are ever going to be persisted. It's important to stay real about your domain model though; its state definitely needs to be persisted some day, or else the application won't meet its acceptance criteria. But while designing, you should not let the fact that you're using a relational database get in the way. Design the objects in such a way that they are useful, that you can do meaningful things with them, and that they are trustworthy; you should never encounter incomplete or inconsistent domain objects.

Still, at some point you're going to have to consider how to store the state of your domain objects (after all, your application at one point is going to shut down and when it comes up, it needs to have access to the same data as before it was restarted). I find that, when designing aggregates, it would be best to act as if they are going to be stored in a document database. The aggregate and all of its parts wouldn't need to be distributed across several tables in a relational database; the aggregate could just be persisted as one whole thing, filed under the ID of the aggregate's root entity.

More common however is the choice for a relational database, and in most projects such a database comes with an ORM. So then, after you've carefully designed your aggregate "the right way", the question is: how do we store this thing in our tables? A common solution is to dissect the aggregate along the lines of its root entity and optionally its child entities. Consider an example from a recent workshop: we have a purchase order and this order has a number of lines. The PurchaseOrder is the root entity of the aggregate with the same name. The Line objects are the child entities (i.e. they have an identity - a line number - which is only unique within the aggregate itself). PurchaseOrder and Line all have value objects describing parts or aspects of these entities (i.e. the product ID that was ordered, the quantity that was ordered, the supplier from whom it was ordered, and so on). This would be a simplified version of PurchaseOrder and Line:

<?php

final class PurchaseOrder
{
    /**
     * @var PurchaseOrderId
     */
    private $id;

    /**
     * @var SupplierId
     */
    private $supplierId;

    /**
     * @var Line[]
     */
    private $lines = [];

    private function __construct(
        PurchaseOrderId $purchaseOrderId,
        SupplierId $supplierId
    ) {
        $this->id = $purchaseOrderId;
        $this->supplierId = $supplierId;
    }

    public static function create(
        PurchaseOrderId $purchaseOrderId,
        SupplierId $supplierId
    ): PurchaseOrder
    {
        return new self($purchaseOrderId, $supplierId);
    }

    public function addLine(
        ProductId $productId,
        OrderedQuantity $quantity
    ): void
    {
        $lineNumber = count($this->lines) + 1;

        $this->lines[] = new Line($lineNumber, $productId, $quantity);
    }

    public function purchaseOrderId(): PurchaseOrderId
    {
        return $this->id;
    }

    // ...
}

final class Line
{
    /**
     * @var int
     */
    private $lineNumber;

    /**
     * @var ProductId
     */
    private $productId;

    /**
     * @var OrderedQuantity
     */
    private $quantity;

    public function __construct(
        int $lineNumber,
        ProductId $productId,
 

Truncated by Planet PHP, read more at the original (another 13128 bytes)

Link
SitePoint PHP: Building an Image Gallery Blog with Symfony Flex: the Setup (18.6.2018, 18:00 UTC)

This post begins our journey into Performance Month's zero-to-hero project. In this part, we'll set our project up so we can fine tune it throughout the next few posts, and bring it to a speedy perfection.


Now and then you have to create a new project repository, run that git init command locally and kick off a new awesome project. I have to admit I like the feeling of starting something new; it's like going on an adventure!

Lao Tzu said:

The journey of a thousand miles begins with one step

We can think about the project setup as the very first step of our thousand miles (users!) journey. We aren't sure where exactly we are going to end up, but it will be fun!

We should also keep in mind the advice from Prof. Donald Knuth:

Premature optimization is the root of all evil (or at least most of it) in programming.

Our journey towards a stable, robust, high-performance web app will start with the simple but functional application --- the so-called minimum viable product (MVP). We'll populate the database with random content, do some benchmarks and improve performance incrementally. Every article in this series will be a checkpoint on our journey!

This article will cover the basics of setting up the project and organizing files for our Symfony Flex project. I'll also show you some tips, tricks and helper scripts I'm using for speeding up the development.

What Are We Building?

Before starting any project, you should have a clear vision of the final destination. Where are you headed? Who will be using your app and how? What are the main features you're building? Once you have that knowledge, you can prepare your environment, third-party libraries, and dive into developing the next big thing.

In this series of articles, we'll be building a simple image gallery blog where users can register or log in, upload images, and create simple public image galleries with descriptions written in Markdown format.

We'll be using the new Symfony Flex and Homestead (make sure you've read tutorials on them, as we're not going to cover them here). We picked Flex because Symfony 4 is just about to come out (if it hasn't already, by the time you're reading this), because it's infinitely lighter than the older version and lends itself perfectly to step-by-step optimization, and it's also the natural step in the evolution of the most popular enterprise PHP framework out there.

All the code referenced in this article is available at the GitHub repo.

We're going to use the Twig templating engine, Symfony forms, and Doctrine ORM with UUIDs as primary keys.

Entities and routes will use annotations; we'll have simple email/password based authentication, and we'll prepare data fixtures to populate the database.

Getting Started with the app

To try out the example we've prepared, do the following:

  • Set up an empty database called "blog".
  • Clone the project repository from GitHub.
  • Run composer install.
  • If you now open the app in your browser, you should see an exception regarding missing database tables. That's fine, since we haven't created any tables so far.
  • Update the .env file in your project root directory with a valid database connection string (i.e., update the credentials); see the example right after this list.
  • Run the database init script ./bin/refreshDb.sh and wait until it generates some nice image galleries.
  • Open the app in your browser and enjoy!
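For reference, a DATABASE_URL entry for a local MySQL database named "blog" could look like the line below. The driver, user and password here are placeholders, so adjust them to your own environment:

# hypothetical example; use your own driver and credentials
DATABASE_URL=mysql://db_user:db_password@127.0.0.1:3306/blog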

After executing bin/refreshDb.sh you should be able to see the home page of our site:

Project homepage

You can log in to the app with credentials user1@mailinator.com and password 123456. See LoadUserData fixture class for more details regarding generated users.

Starting from scratch

In this section, we'll describe how to set up a new project from scratch. Feel free to take a look at the sample app codebase and see the details.

Truncated by Planet PHP, read more at the original (another 6101 bytes)

Link
Evert Pot: Scheduling posts on Github pages with AWS lambda functions (18.6.2018, 16:00 UTC)

If you are reading this post, it means it worked! I scheduled this post yesterday to automatically publish at 9am the next day, PDT.

I’ve tried to find a solution for this a few times, but most recently realized that with AWS Lambda functions it might have finally become possible to do this without managing a whole server.

I got some inspiration from Alex Learns Programming, which made me realize Github has a simple API to trigger a new page build.

You need a few more things:

  1. Create a Personal Access Token on GitHub.
  2. Make sure you give it at least the repo and user privileges.
  3. Make sure you add future: false to your _config.yaml.
  4. Write a blog post, and set the date to some point in the future.
  5. Create an AWS Lambda function.

To automatically have a lambda run on a specific schedule, you can use a ‘CloudWatch Event’.

Source

This is (most of the) code for the actual AWS lambda function:

const { TOKEN, USERNAME, REPO } = require('./config');
const fetch = require('node-fetch');

exports.handler = async (event) => {

  const url = 'https://api.github.com/repos/' + USERNAME + '/' + REPO + '/pages/builds';
  const result = await fetch(url, {
    method: 'POST',
    headers: {
      'Authorization': 'Token ' + TOKEN,
      'Accept': 'application/vnd.github.mister-fantastic-preview+json',
    }
  });

  if (!result.ok) {
    throw new Error('Failure to call github API. HTTP error code: ' + result.status);
  }

  console.log('Publish successful');
};

I released the full source on Github, it’s pretty universal. Just add your own configuration to config.js.

Cost

The Free Tier for AWS allows for 1,000,000 triggers per month, which is plenty (I’m triggering it every 15 minutes, which is less than 3000 triggers per month).

I configured it to use 128MB of memory. The free tier includes 3,200,000 seconds per month at that memory limit. Since the script takes around 400ms to run, this is also more than plenty.

tl;dr: it’s free.

Conclusion

This was my first foray into AWS Lambda functions, and I was surprised how easy and fun it was.

Hope it’s useful to anyone else!

Link
Evert Pot: WebDAV features that might be useful for HTTP services (15.6.2018, 06:03 UTC)

While WebDAV is no longer really used as the foundation for new HTTP services, the WebDAV standard introduced a number of features that are applicable to other types of HTTP services.

WebDAV comprises many standards that each build on the HTTP protocol. Many of the features it adds to HTTP are not strictly tied to WebDAV and are useful in other contexts. And even though it ‘extends’ HTTP, it does so within the confines of the HTTP framework, so you can take advantage of them using standard HTTP clients and servers.

MOVE & COPY

WebDAV adds 2 HTTP methods for moving and copying resources. If you were to stick to typical HTTP/REST semantics, doing a move operation might imply you need several requests.

GET /source <-- retrieve a resource
PUT /destination <-- create the new resource
DELETE /source <-- remove the old one

One issue with this approach is that it’s not an atomic operation. In the middle of this process there is a short window where both the source and destination exist.

If an atomic move operation is required, a typical solution might be to create a POST request with a specific media-type for this, and this is a completely valid solution.

A POST request like that might look like this:

POST /source HTTP/1.1
Content-Type: application/vnd.move+json

{
  "destination": "/destination"
}

The WebDAV MOVE request looks like this:

MOVE /source HTTP/1.1
Destination: /destination

Both the MOVE and COPY requests use the Destination header to tell the server where to copy/move to. The server is supposed to perform this operation atomically, and it must either completely succeed or completely fail.
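To illustrate those semantics (my own sketch, not something from the article), a tiny PHP front controller for file-backed resources could handle MOVE like this. rename() is atomic on a single filesystem, so the resource never exists in both places at once; real code would of course also need to validate the paths:

<?php
// Hypothetical sketch: handling a WebDAV-style MOVE for resources stored as files.
if ($_SERVER['REQUEST_METHOD'] === 'MOVE') {
    $root = __DIR__ . '/data';
    $source = $root . parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

    if (!isset($_SERVER['HTTP_DESTINATION'])) {
        http_response_code(400); // MOVE requires a Destination header
    } elseif (!is_file($source)) {
        http_response_code(404); // nothing to move
    } elseif (!rename($source, $root . parse_url($_SERVER['HTTP_DESTINATION'], PHP_URL_PATH))) {
        http_response_code(500); // the move failed as a whole
    } else {
        http_response_code(204); // moved; no content to return
    }
    exit;
}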

Using POST for this is completely valid. However, in my mind, using an HTTP method with more specific semantics can be nice. This is not that different from using PATCH for partial updates. Anything that can be done with PATCH could be done with POST, yet people tend to like the more specific meaning of a PATCH method to further convey intent.

Sometimes it’s required to do complex queries for information on a server. The standard way of retrieving information is with a GET request, but there are times when it’s infeasible to embed the entire query in the URL.

There’s more than one way to solve this problem. Here’s a few common ones:

  1. You can use POST instead. This is by far the most common, and by many considered the most pragmatic.
  2. Create a “report” resource, expose a separate “result” resource and fetch it with GET. This is considered a better RESTful solution because you still get a way to reference the result by its address.
  3. Supply a request body with your GET request. This is a really bad idea, and goes against many HTTP best practices, but some products like Elasticsearch do this.

If you are considering option #1 (POST) you’re opting out of one of the most useful features of HTTP/Rest services, which is the ability to address specific resources.

However, you are giving up another feature of GET. GET is considered a ‘safe’ and ‘idempotent’ r

Truncated by Planet PHP, read more at the original (another 6666 bytes)

Link
SitePoint PHP: Apache vs Nginx Performance: Optimization Techniques (13.6.2018, 18:00 UTC)

Some years ago, the Apache Foundation's web server, known simply as "Apache", was so ubiquitous that it became synonymous with the term "web server". Its daemon process on Linux systems has the name httpd (meaning simply http process) --- and comes preinstalled in major Linux distributions.

It was initially released in 1995, and, to quote Wikipedia, "it played a key role in the initial growth of the World Wide Web". It is still the most-used web server software according to W3techs. However, according to those reports which show some trends of the last decade and comparisons to other solutions, its market share is decreasing. The reports given by Netcraft and Builtwith differ a bit, but all agree on a trending decline of Apache's market share and the growth of Nginx.

Nginx --- pronounced engine x --- was released in 2004 by Igor Sysoev, with the explicit intent to outperform Apache. Nginx's website has an article worth reading which compares these two technologies. At first, it was mostly used as a supplement to Apache, mostly for serving static files, but it has been steadily growing, as it has been evolving to deal with the full spectrum of web server tasks.

It is often used as a reverse proxy, load balancer, and for HTTP caching. CDNs and video streaming providers use it to build their content delivery systems where performance is critical.

Apache has been around for a long time, and it has a big choice of modules. Managing Apache servers is known to be user-friendly. Dynamic module loading allows for different modules to be compiled and added to the Apache stack without recompiling the main server binary. Oftentimes, modules will be in Linux distro repositories, and after installing them through system package managers, they can be gracefully added to the stack with commands like a2enmod. This kind of flexibility has yet to be seen with Nginx. When we look at a guide for setting up Nginx for HTTP/2, modules are something Nginx needs to be built with --- configured for at build-time.

One other feature that has contributed to Apache's market rule is the .htaccess file. It is Apache's silver bullet, which made it a go-to solution for the shared hosting environments, as it allows controlling the server configuration on a directory level. Every directory on a server served by Apache can have its own .htaccess file.
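As an illustration (not from the article), a per-directory .htaccess file overriding configuration for a typical front-controller setup might contain nothing more than this:

# hypothetical .htaccess: send requests for non-existent files and directories to index.php
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [QSA,L]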

Nginx not only has no equivalent solution, but discourages such usage due to performance hits.

Server share stats, by Netcraft

Server vendors market share 1995–2005. Data by Netcraft

LiteSpeed, or LSWS, is one server contender that has a level of flexibility that can compare to Apache, while not sacrificing performance. It supports Apache-style .htaccess, mod_security and mod_rewrite, and it's worth considering for shared setups. It was planned as a drop-in replacement for Apache, and it works with cPanel and Plesk. It's been supporting HTTP/2 since 2015.

LiteSpeed has three license tiers, OpenLiteSpeed, LSWS Standard and LSWS Enterprise. Standard and Enterprise come with an optional caching solution comparable to Varnish, LSCache, which is b

Truncated by Planet PHP, read more at the original (another 12130 bytes)

Link
Rob Allen: Using API Gateway with Serverless & OpenWhisk (13.6.2018, 10:02 UTC)

As with all serverless offerings, OpenWhisk includes an API Gateway to provide HTTP routing to your serverless actions. This provides a number of advantages over web actions, the most significant of which are routing based on HTTP method, authentication, and custom domains (in IBM Cloud).

Creating routes with the wsk CLI

To route to an action using API Gateway, you first need to make your action a web action:

$ wsk action update todo-backend/listTodos listTodos.php --web raw

(You can also use --web true, if you want the automatic decoding of post bodies and query parameters.)

Now that we have a web action we can create the API route:

$ wsk api create /todos GET todo-backend/listTodos --apiname todo-backend --response-type http

This creates the HTTP endpoint which is a massively long URL that starts with https:// and ends with /todos.

Creating routes with Serverless

All these CLI commands are tedious and so clearly they need automating. One easy way to do this is to use Serverless Framework as I've discussed before.

To create an API Gateway endpoint for an action, we add an http event to the action definition in serverless.yml:

list-todos:
  handler: src/actions/listTodos.main
  name: "todo-backend/list-todos"
  events:
    - http:
        path: /todos
        method: get
        resp: http

The handler tells Serverless that the code for this action is in the main() function within listTodos.php and is required. We use the name parameter to set the package for this action (todo-backend).

Finally, to the list of events, we add an http section where we set the following:

  • path: the URL for the endpoint
  • method: the HTTP method for this endpoint
  • resp: the web action content type. Set this to http so that you can send back the status code and any custom headers
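For context, that action definition lives under the functions key of serverless.yml, alongside the OpenWhisk provider configuration. A minimal file might look roughly like this (the service name and runtime value here are assumptions; check the serverless-openwhisk plugin documentation for the exact strings your setup needs):

service: todo-backend

provider:
  name: openwhisk
  runtime: php

plugins:
  - serverless-openwhisk

functions:
  list-todos:
    handler: src/actions/listTodos.main
    name: "todo-backend/list-todos"
    events:
      - http:
          path: /todos
          method: get
          resp: http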

Sending the response from the action

If you use the http web action content type, you must return a dictionary with these three keys:

  • statusCode: the status code (100-599)
  • headers: a dictionary of headers
  • body: the body content. If this is a dictionary, then it will be automatically converted to JSON for you.

For example, in listTodos.php, we might have this:

<?php

/**
 * GET /todos
 */
function main(array $args) : array
{
    $container = new \App\AppContainer($args);
    $mapper = $container[\Todo\TodoMapper::class];

    $todos = $mapper->fetchAll();

    return [
        'statusCode' => 200,
        'headers' => [
            'X-Clacks-Overhead' => 'GNU Terry Pratchett',
        ],
        'body' => $todos->toArray(),
    ];
}

In this particular case, we don't really need to set the statusCode as the default is 200 anyway. We also don't need to set the Content-Type to application/json, as that is done for us when OpenWhisk converts our array to JSON.
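Once everything is deployed, the endpoint can be smoke-tested from the command line. The host and base path below are placeholders for the long API Gateway URL that wsk api list reports:

$ curl https://<api-gateway-host>/<api-base-path>/todos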

Fin

Adding API Gateway endpoints to an OpenWhisk API using Serverless takes just a few lines of configuration, and then we can let the tool do the legwork of setting up OpenWhisk for us.

Link
SitePoint PHP: Making Your Website Faster and Safer with Cloudflare (12.6.2018, 18:00 UTC)

Cloudflare is an industry leader in the content-delivery space, reducing load and speeding up millions of websites.

What is peculiar about this provider is that it didn't start as a speed-up/performance tool, but was instead born from Project Honeypot, which was conceived as a spam and hacking protection service. To this day, this is one of Cloudflare's major selling points: DDoS detection and protection. Their algorithms take note of visitors' IP addresses, payloads, resources requested, and request frequency to detect malicious visitors.

Because it sits as a proxy between websites and all incoming traffic, Cloudflare is able to reduce strain on servers significantly, so much so that DDoS attacks won't even reach the origin websites, as explained in this introduction. Cloudflare also provides the Always Online option, which caches a version of the user's website and serves a limited version of it in case of origin server outage --- when the original website returns 5xx or 4xx errors. It also features a full-fledged page cache.

These features can be a huge advantage: they can salvage a struggling web server under heavy load, and in case of server errors, can give some breathing room to developers to figure things out.

Always Online

It's also available free. There are premium tiers, of course, and there are things (like additional page rules) that require paying, but the scope of Cloudflare's free tier alone makes it worthwhile to learn its ins and outs.

Comparison benchmarks put Cloudflare somewhere in the middle in regard to speed, but it would be hard to argue that it is the best value CDN on the market.

Setting Up Cloudflare

Setting a site up with Cloudflare is very straightforward. After registering at cloudflare.com, we can add a new website. While the system scans for the given domain's IP and other details, we're offered an introductory video. Upon completion, we're given new nameservers to set up with our registrar.

Adding the website to Cloudflare

We need to register these nameservers with our registrar and wait for changes to propagate across the internet. It may take up to 24 hours.

This change means giving all control over our domain to Cloudflare. This also means that, if we have email on this domain (MX records), we need to transfer these records to Cloudflare. If we have any subdomains, they also need to be set up as respective A records in Cloudflare's dashboard.

All existing domain records set up with our domain registrar or hosting provider need to be moved/copied to Cloudflare.

Some managed hosting providers may simplify/automate this transition process even more.

Cloudflare DNS dashboard

For each of our domain records, we can decide to simply let all the traffic pass through directly to our servers --- which means we can set exceptions for certain subdomains --- or we can turn off all Cloudflare functionality --- for example, while we're making some changes on the website.

switching CDN on

Once we've set the domain up, that's basically all the work required outside of Cloudflare's dashboard. There's nothing more to do on the website itself, or the origin server. All further tuning is done on the Cloudflare website.

Setting up Encryption

An SSL certificate is part of the free plan on Cloudflare. There are four options for SSL setup, and we can find them under the Crypto tab in the dashboard.

  • OFF - this is self-explanatory. All traffic will be redirected to unsecured protocol (http)
  • FLEXIBLE - regardless of the protocol of our server, and whether we have an existing SSL certificate on it or not, Cloudflare will serve all our pages to end-visitors over https. Connections from Cloudflare to the o

Truncated by Planet PHP, read more at the original (another 2774 bytes)

Link