PHP: Hypertext Preprocessor - PHP 7.1.20 Released (20.7.2018, 00:00 UTC)
The PHP development team announces the immediate availability of PHP 7.1.20. This is a security release. Several security bugs have been fixed in this release. All PHP 7.1 users are encouraged to upgrade to this version. For source downloads of PHP 7.1.20 please visit our downloads page; Windows sources and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
Link
Official Blog of the PEAR Group/PEAR President - Security Vulnerability Announcement: HTML_QuickForm (19.7.2018, 18:22 UTC)

A vulnerability in the HTML_QuickForm package has been found which potentially allows remote code execution.

A new release of the package is available which fixes this issue. Users are strongly encouraged to upgrade by running:

$ pear upgrade HTML_QuickForm-3.2.15

Thanks to Patrick Fingle and the CiviCRM Security Team who reported this issue.

Link
PHP: Hypertext Preprocessor - PHP 7.3.0alpha4 Released (19.7.2018, 00:00 UTC)
The PHP team is glad to announce the release of PHP 7.3.0alpha4, the fourth pre-release of PHP 7.3.0. The rough outline of the PHP 7.3 release cycle is specified in the PHP Wiki. For source downloads of PHP 7.3.0alpha4 please visit the download page. Windows sources and binaries can be found on windows.php.net/qa/. Please carefully test this version and report any issues found in the bug reporting system. THIS IS A DEVELOPMENT PREVIEW - DO NOT USE IT IN PRODUCTION! For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release will be Beta 1, planned for August 2nd. The signatures for the release can be found in the manifest or on the QA site. Thank you for helping us make PHP better.
Link
PHP: Hypertext Preprocessor - PHP 5.6.37 Released (19.7.2018, 00:00 UTC)
The PHP development team announces the immediate availability of PHP 5.6.37. This is a security release. Several security bugs have been fixed in this release. All PHP 5.6 users are encouraged to upgrade to this version. For source downloads of PHP 5.6.37 please visit our downloads page; Windows sources and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
Link
Matthew Weier O'Phinney - Notes on GraphQL (18.7.2018, 22:05 UTC)

The last week has been my first foray into GraphQL, using the GitHub GraphQL API endpoints. I now have Opinions™.

The promise is fantastic: query for everything you need, but nothing more. Get it all in one go.

But the reality is somewhat... different.

What I found was that you end up with a lot of garbage data structures that you then, on the client side, need to decipher and massage, unpacking edges, nodes, and whatnot. I ended up having to do almost a dozen array_column, array_map, and array_reduce operations on the returned data to get a structure I can actually use.

The final data I needed looked like this:

[
  {
    "name": "zendframework/zend-expressive",
    "tags": [
      {
        "name": "3.0.2",
        "date": "2018-04-10"
      }
    ]
  }
]

To fetch it, I needed a query like the following:

query showOrganizationInfo(
  $organization:String!
  $cursor:String!
) {
  organization(login:$organization) {
    repositories(first: 100, after: $cursor) {
      pageInfo {
        startCursor
        hasNextPage
        endCursor
      }
      nodes {
        nameWithOwner
        tags:refs(refPrefix: "refs/tags/", first: 100, orderBy:{field:TAG_COMMIT_DATE, direction:DESC}) {
          edges {
            tag: node {
              name
              target {
                ... on Commit {
                  pushedDate
                }
                ... on Tag {
                  tagger {
                    date
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

Which gave me data like the following:

{
  "data": {
    "organization": {
      "repositories": {
        "pageInfo": {
          "startCursor": "...",
          "hasNextPage": true,
          "endCursor": "..."
        },
        "nodes": [
          {
            "nameWithOwner": "zendframework/zend-expressive",
            "tags": {
              "edges": [
                {
                  "tag": {
                    "name": "3.0.2",
                    "target": {
                      "tagger": {
                        "date": "2018-04-10"
                      }
                    }
                  }
                }
              ]
            }
          }
        ]
      }
    }
  }
}
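The client-side massaging this calls for might look roughly like the following sketch. It is an assumption based on the description above (the `array_map` passes and the `pushedDate`/`tagger` fallback are mine, not the author's actual code); the response payload is inlined as a fixture so the transformation is self-contained:

```php
<?php

// Decoded GraphQL payload, shaped like the response shown above.
$response = json_decode('{
  "data": {
    "organization": {
      "repositories": {
        "pageInfo": {"startCursor": "a", "hasNextPage": true, "endCursor": "b"},
        "nodes": [
          {
            "nameWithOwner": "zendframework/zend-expressive",
            "tags": {
              "edges": [
                {"tag": {"name": "3.0.2", "target": {"tagger": {"date": "2018-04-10"}}}}
              ]
            }
          }
        ]
      }
    }
  }
}', true);

// Unpack nodes -> edges -> tag, discarding the GraphQL scaffolding.
$repositories = array_map(function (array $node): array {
    $tags = array_map(function (array $edge): array {
        $target = $edge['tag']['target'];
        // The date lives under "tagger" for annotated tags, or under
        // "pushedDate" when the ref points directly at a commit.
        $date = $target['tagger']['date'] ?? $target['pushedDate'] ?? null;

        return [
            'name' => $edge['tag']['name'],
            'date' => $date,
        ];
    }, $node['tags']['edges']);

    return [
        'name' => $node['nameWithOwner'],
        'tags' => $tags,
    ];
}, $response['data']['organization']['repositories']['nodes']);

var_export($repositories);
```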

How did I discover how to create the query? I'd like to say it was by reading the docs. I really would. But these gave me almost zero useful examples, particularly when it came to pagination, ordering result sets, or what those various "nodes" and "edges" bits were, or why they were necessary. (I eventually found the information, but it's still rather opaque as an end-user.)

Additionally, see that pageInfo bit? This brings me to my next point: pagination sucks, particularly if it's not at the top-level. You can only fetch 100 items at a time from any given node in the GitHub GraphQL API, which means pagination. And I have yet to find a client that will detect pagination data in results and auto-follow them. Additionally, the "after" property had to be something valid... but there were no examples of what a valid value would be. I had to resort to StackOverflow to find an example, and I still don't understand why it works.

I get why clients cannot unfurl pagination, as pagination data could appear anywhere in the query. However, it hit me hard, as I thought I had a complete set of data, only to discover around half of it was missing once I finally got the processing correct.

If any items further down the tree also require pagination, you're in for some real headaches, as you then have to fetch paginated sets depth-first.
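Following those pageInfo cursors by hand amounts to a loop like the sketch below. The transport is stubbed with canned pages so the pagination logic itself is visible; `$runQuery` is a hypothetical helper standing in for a POST to the GraphQL endpoint with the cursor as the `after` variable:

```php
<?php

// Canned pages standing in for successive GraphQL responses; a real
// version would POST the query and variables to the API endpoint.
$pages = [
    '' => ['pageInfo' => ['hasNextPage' => true,  'endCursor' => 'cursor-1'], 'nodes' => ['repo-a', 'repo-b']],
    'cursor-1' => ['pageInfo' => ['hasNextPage' => false, 'endCursor' => 'cursor-2'], 'nodes' => ['repo-c']],
];

$runQuery = function (string $cursor) use ($pages): array {
    return $pages[$cursor];
};

// Keep requesting the next page until pageInfo says there is none,
// merging the nodes as we go. Forget this loop and you silently work
// with only the first 100 items.
$cursor = '';
$nodes = [];

do {
    $page = $runQuery($cursor);
    $nodes = array_merge($nodes, $page['nodes']);
    $cursor = $page['pageInfo']['endCursor'];
} while ($page['pageInfo']['hasNextPage']);

var_export($nodes); // repo-a, repo-b, repo-c
```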

So, while GraphQL promises fewer round trips and exactly the data you need, my experience so far is:

  • I end up having to be very careful about structuring my queries, paying huge attention to pagination potential, and often sending multiple queries ANYWAYS. A well-documented REST API is often far easier to understand and work with immediately.

  • I end up doing MORE work client-side to make the data I receive back USEFUL. This is because the payload structure is based on the query structure and the various permutations you need in order to get at the data you need. Again, a REST API usually has a single, well-documented payload, making consumption far easier.

I'm sure I'm

Truncated by Planet PHP, read more at the original (another 1995 bytes)

Link
Evert Pot - 202 Accepted (17.7.2018, 15:00 UTC)

202 Accepted means that the server accepted the request, but it’s not yet sure whether the request will be completed successfully.

The specification calls it ‘intentionally non-committal’. You might see APIs using this response for, for example, asynchronous batch processing. HTTP doesn’t have a standard way to communicate after a request whether the request eventually succeeded. An API using this status might use some other facility to do so later.

For example, it might send an email to a user telling them that the batch process worked, or it might expose another endpoint in the API that indicates the current status of a long-running process.

Example

POST /my-batch-process HTTP/1.1
Content-Type: application/json

...

HTTP/1.1 202 Accepted
Link: </batch-status/5545>; rel="http://example.org/batch-status"
Content-Length: 0
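A client consuming such an API might poll the linked status resource until the batch completes. A rough sketch, with canned responses standing in for GET requests (the `status` payload shape is made up for illustration):

```php
<?php

// Canned responses standing in for GETs against the /batch-status/5545
// resource advertised in the Link header above.
$responses = [
    ['status' => 'processing'],
    ['status' => 'processing'],
    ['status' => 'completed'],
];

$polls = 0;
do {
    $status = $responses[$polls]['status'];
    $polls++;
    // A real client would sleep() or back off between polls here.
} while ($status !== 'completed');

echo "Batch finished after $polls polls\n"; // 3
```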

Link
Matthias Noback - Objects should be constructed in one go (17.7.2018, 07:50 UTC)

Consider the following rule:

When you create an object, it should be complete, consistent and valid in one go.

It is derived from the more general principle that it should not be possible for an object to exist in an inconsistent state. I think this is a very important rule, one that will gradually lead everyone from the swamps of those dreaded "anemic" domain models. However, the question still remains: what does all of this mean?

Well, for example, we should not be able to construct a Geolocation object with only a latitude:

final class Geolocation
{
    private $latitude;
    private $longitude;

    public function __construct()
    {
    }

    public function setLatitude(float $latitude): void
    {
        $this->latitude = $latitude;
    }

    public function setLongitude(float $longitude): void
    {
        $this->longitude = $longitude;
    }
}

$location = new Geolocation();
// $location is in invalid state!

$location->setLatitude(-20.0);
// $location is still in invalid state!

It shouldn't be possible to leave it in this state. It shouldn't even be possible to construct it with no data in the first place, because having a specific value for latitude and longitude is one of the core aspects of a geolocation. These values belong together, and a geolocation "can't live" without them. Basically, the whole concept of a geolocation would become meaningless if this were possible.

An object usually requires some data to fulfill a meaningful role. But it also poses certain limitations to what kind of data, and which specific subset of all possible values in the universe would be allowed. This is where, as part of the object design phase, you'll start looking for domain invariants. What do we know from the relevant domain that would help us define a meaningful model for the concept of a geolocation? Well, one of these things is that latitude and longitude should be within a certain range of values, i.e. -90 to 90 inclusive and -180 to 180 inclusive, respectively. It would definitely not make sense to allow any other value to be used. It would render all modelled behavior regarding geolocations useless.

Taking all of this into consideration, you may end up with a class that forms a sound model of the geolocation concept:

final class Geolocation
{
    private $latitude;
    private $longitude;

    public function __construct(
        float $latitude,
        float $longitude
    ) {
        Assertion::between($latitude, -90, 90);
        $this->latitude = $latitude;

        Assertion::between($longitude, -180, 180);
        $this->longitude = $longitude;
    }
}

$location = new Geolocation(-20.0, 100.0);

This effectively protects geolocation's domain invariants, making it impossible to construct an invalid, incomplete or useless Geolocation object. Whenever you encounter such an object in your application, you can be sure that it's safe to use. No need to use a validator of some sorts to validate it first! This is why that rule about not allowing objects to exist in an inconsistent state is wonderful. My not-to-be-nuanced advice is to apply it everywhere.
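To see the rule in action: constructing an invalid instance now fails immediately, at the constructor. The sketch below inlines the range checks as plain guard clauses instead of the Assertion library, purely to keep it self-contained:

```php
<?php

final class Geolocation
{
    private $latitude;
    private $longitude;

    public function __construct(float $latitude, float $longitude)
    {
        // Same invariants as above, as plain guard clauses.
        if ($latitude < -90 || $latitude > 90) {
            throw new InvalidArgumentException('Latitude must be between -90 and 90');
        }
        if ($longitude < -180 || $longitude > 180) {
            throw new InvalidArgumentException('Longitude must be between -180 and 180');
        }

        $this->latitude = $latitude;
        $this->longitude = $longitude;
    }
}

try {
    new Geolocation(-100.0, 0.0); // latitude out of range
} catch (InvalidArgumentException $e) {
    echo $e->getMessage(), "\n"; // Latitude must be between -90 and 90
}
```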

An aggregate with child entities

The rule isn't without issues though. For example, I've been struggling to apply it to an aggregate with child entities, in particular, when I was working on modelling a so-called "purchase order". It's used to send to a supplier and ask for some goods (these "goods" are specific quantities of a certain product). The domain expert talks about this as "a header with lines", or "a document with lines". I decided to call the aggregate root "Purchase Order" (a class named PurchaseOrder) and to call the child entities representing the ordered goods "Lines" (in fact, every line is an instance of Line).

An important domain invariant to consider is: "every purchase order has at least one line". After all, it just doesn't make sense for an order to have no lines. When trying to apply this design rule, my first instinct was to provide the list of lines as a constructor argument. A simplified implementation (note that I don't use proper values objects in these examples!) would look like this:

final class PurchaseOrder
{
    private $lines;

    /**
     * @param Line[] $lines
     */
    public function __construct(array $lines)
    {
        Assertion::greaterOrEqualThan(count($lines), 1,
            'A purchase order should have at least one line');

        $this->lines = $lines;
    }
}

final class Line
{
    private $lineNumber;
    private $productId;
    private $quantity;

    public function __construct(
        int $lineNumber,
        int $productId,
 

Truncated by Planet PHP, read more at the original (another 11698 bytes)

Link
Evert Pot - Bye Disqus, hello Webmention! (16.7.2018, 16:00 UTC)

Since 2013 I’ve used Disqus on this website for comments. Over the years Disqus has been getting ‘fatter’, so I’ve been thinking of switching to something new.

Then on Friday, I saw a tweet which got me inspired:

(embedded tweet)

This links to Nicolas Hoizey’s blog, in which he details moving his Github Pages-based blog from Disqus to Webmentions. I spent all day Saturday doing the same.

What are webmentions?

Webmention is a W3C standard for distributed commenting. It’s very similar to “Pingbacks”. When somebody wants to respond to an article here from their own blog, a link will be created automatically.
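Sending one is deliberately simple: per the W3C spec, the sender discovers the receiver’s endpoint and POSTs a form-encoded `source` and `target`. A minimal sketch of the sending half (endpoint discovery omitted; the urls are placeholders):

```php
<?php

// Per the Webmention spec, a mention is delivered as a form-encoded
// POST with exactly two parameters: the "source" page (which contains
// the mention) and the "target" page (which is being mentioned).
$payload = http_build_query([
    'source' => 'https://example.com/my-reply',
    'target' => 'https://evertpot.com/some-article/',
]);

// A real sender would first discover the endpoint from the target's
// <link rel="webmention"> tag and then POST $payload to it with a
// Content-Type of application/x-www-form-urlencoded.
echo $payload, "\n";
```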

I used the Webmention.io hosted service to do this. To receive webmentions, I just needed to embed the following in the <head> of this site:

<link rel="pingback" href="https://webmention.io/evertpot.com/xmlrpc" />
<link rel="webmention" href="https://webmention.io/evertpot.com/webmention" />

The webmention.io site has a simple API, with open CORS headers. I wrote a custom script to get the webmentions embedded in this blog. The source is on github.

Importing old comments

I exported the disqus comments, and wrote a script to convert them into JSON. The source for the exporter is reusable. I also put it on github if anyone finds it useful.

The last time I switched blogging systems (away from Habari), I also never got around to importing comments. I took the time to import those as well, so now comments all the way from 2006 are back!

Jekyll has a ‘data files’ feature, which allows me to just drop the json file in a _data directory, and with a recursive liquid include I can show comments and threads:

(The Liquid include is in this gist: https://gist.github.com/evert/409f5effca5e7fe706bd1c3aad13af9d)

Unfortunately Disqus has no means to get an email address, url or avatar from the export, so all Disqus comments now just show up with a boring name, as can be seen here.

If you ever commented on this site with Disqus, and want to show up with a url and/or avatar find yourself in the comment archive on github and send me a PR, or just tell me!

Getting tweets and likes from twitter

To get mentions from social media, like Twitter, I’m using Bridgy. This is a free service that listens for responses to tweets and converts them to Webmentions.

It also supports other networks, but Twitter is the only one I have set up. To see it in action, you can see a bunch of twitter responses right below this article.

What’s missing?

It’s not easy currently to discover on this site that Webmentions are possible, and it’s not possible to leave a regular comment anymore. I hope I can fix both of these in the future. I think the result is that the barrier to entry has become very high, and I’d like to see if it’s possible for me to reduce that again. How would you go about it?

Webmention.io does not have good spam protection. Spam was a major issue with pingbacks, and is pretty much why pingbacks died. Webmention is not big enough for th

Truncated by Planet PHP, read more at the original (another 539 bytes)

Link
Evert Pot - 201 Created (10.7.2018, 15:00 UTC)

201 Created, just like 200 OK, means that the request was successful, but it also resulted in a new resource being created.

In the case of a PUT request, it means that a new resource was created on the actual url that was specified in the request.

Example

PUT /new-resource HTTP/1.1
Content-Type: text/html
Host: example.org

...

HTTP/1.1 201 Created
ETag: "foo-bar"

POST requests

If you got a 201 in response to a POST request, it means that a new resource was created at a different endpoint. For those cases, a Location header must be included to indicate where the new resource lives.

In the following example, we’re creating a new resource via POST, and the server responds with the new location and the ETag of the new resource.

POST /collection/add-member HTTP/1.1
Content-Type: application/json
Host: example.org

{ "foo": "bar" }

HTTP/1.1 201 Created
ETag: "gir-zim"
Location: /collection/546

It’s a common misconception that POST is generally for creating new resources, and PUT is strictly for updating them. However, the real difference is that PUT should be the preferred method if the client can determine the url of the resource it wants to create.

In practice most servers do want control over the url, perhaps because it’s tied to an auto-incrementing database id.

Link
Paul M. Jones - Atlas.Orm 3.0 (“Cassini”) Now Stable (10.7.2018, 14:36 UTC)

I am delighted to announce the immediate availability of Atlas.Orm 3.0 (“Cassini”), the flagship package in the Atlas database framework. Installation is as easy as composer require atlas/orm ~3.0.

Atlas.Orm helps you build and work with a model of your persistence layer (i.e., tables and rows) while providing a path to refactor towards a richer domain model as needed. You can read more about Atlas at the newly-updated project site, and you can find extensive background information in these blog posts:

If you want a data-mapper alternative to Doctrine, especially for your pre-existing table structures, then Atlas is for you!


Read the Reddit commentary on this post here.

Link