E2E testing that requires WordPress instances

Description

The goal is to create a testing system that can:

  • Spin up a WordPress instance inside of a Docker container

  • Create custom WP instances with (but not limited to):

    • A fixed or the latest version of WordPress
    • A fixed or the latest version of any WP plugin
    • Any configuration (database)
    • Media uploaded
  • The configuration to spin up the instance is in the same codebase as the tests

  • Be able to test that with Cypress
  • Be able to test that with Jest
  • Be able to run tests in parallel

Examples

Implementation Proposal

Components

1. The WP instance (WI)

We want to have a testing system that can run an instance of WP with any kind of custom configuration. Let’s refer to this WordPress instance as WI hereafter. The WI can have the following configuration:

  • Fixed or latest version of WordPress
  • Fixed or latest version of any WP plugin (+ any configuration of that plugin)
  • Any configuration (database)
  • Any custom WP configuration that can be done through the WP CLI
  • Any media uploaded
  • Custom post contents (with embeds or any kind of weird content)

This WP instance should run inside a Docker container.

2. Frontity app

Additionally, we want to be able to build a frontity app with any combination of flags and values for those flags. At the moment the only flags are:

  • --target = es5 | module
  • publicPath (local path or url)

but we expect more flags in the future. This feature of the testing system should replace the current hack of passing the values directly in the npm scripts in the main package.json at the root of the frontity repo :sweat_smile:

Then, this frontity app should connect to the WI.

Finally, we should be able to run e2e tests against the frontity app with cypress.

Two Separate systems

First, it seems that we will need two separate systems for e2e tests. One for the frontity repo and another one for the wp-plugins repo. They are going to be largely similar (and could re-use most of the code/configuration) but in practice they will work slightly differently.

In the wp-plugins repo:

  • We have to be able to build each plugin from the source and run the e2e/unit tests against those built plugins both locally and in a GH action. That means that we want to launch the WI with the latest version of some plugin, built from source, at the HEAD of the repository.

In the frontity repo:

  • We only run the WI with the version of the plugin that is already published.
  • Alternative: We might want to publish all the frontity plugins as a zipfiles (on github) whenever a pull request is merged into dev in the wp-plugins repo. This way, they could be installable from a URL like: https://github.com/frontity/wp-plugins/blob/dev/dist/head-tags.zip

This distinction is mostly for practical reasons - I think it’s going to be very hard to test any version of frontity against any version of any plugin while they live in separate repositories, unless we bring wp-plugins into the monorepo.

:loud_sound: This is up for discussion still. Is this sufficient…?

Running locally vs. running in github actions

Because github actions do not run locally (although some try) we need a (slightly) different way of running the tests depending on whether we’re on GH actions or localhost, although I think that typically those tests will (and should) just be run on the CI.

1. frontity on github actions

We are quite lucky because it turns out that we can just use the base github action for cypress to define the steps that we need in order to run the e2e tests.

Something like:

name: docker-e2e-test

on: [pull_request]

jobs:
  run-e2e:
    strategy:
      matrix:
        target: ["module"]
        publicPath: ["/custom/path"]
        wordpressVersion: ["5.0"]

    runs-on: ubuntu-latest
    steps:
      
      ### some steps here omitted for brevity:
      
      - name: Substitute variables
        uses: microsoft/variable-substitution@v1 
        with:
          files: docker-compose.yaml
        env:
          wordpress_version: ${{ matrix.wordpressVersion }}
      
      - name: Run e2e tests
        uses: cypress-io/github-action@v1.21.0
        with:
          build: ./scripts/build-e2e-docker.sh ${{ matrix.publicPath }} ${{ matrix.target }}
          start: node scripts/start-e2e-docker.js && npx frontity serve --port 3001
          wait-on: "http://localhost:3001"
          wait-on-timeout: 180
          env: HEADLESS=true
          working-directory: e2e
          config: video=true,screenshotOnRunFailure=true

Note that in the above file:

  1. We use https://github.com/microsoft/variable-substitution to create multiple runs for each version of WP. :loud_sound:It’s debatable whether that’s the best approach, more info below
  2. We call a separate script in the build command of the cypress github action because the cypress action does not seem to allow multiple chained commands like cd e2e/projects && npx frontity build

2. frontity on localhost

On localhost, we will need to:

  1. Either build the frontity application or run it in the dev mode
  2. Manually run scripts/start-e2e-docker.sh to start the WI with docker (this requires docker to be installed locally)
  3. Run the test suite that you want: e2e or unit tests

:blue_book:Side note about scripts

This is not an essential part of the proposal, but I suggest that we create a few “scripts” in a /scripts directory at the root of the repo. I will assume the presence of those scripts from now on. At a minimum we’ll need a script like build-e2e-docker.sh and another one like start-e2e-docker.sh:

##  /scripts/start-e2e-docker.sh
##  This is the script that is referred to in docker-e2e-test.yaml

# bring up the containers
docker-compose up -d

# Wait until server is ready.
npx wait-on http-get://localhost:8080

# Initiate the default database
docker-compose exec -T wpcli wp core install \
  --url=example.com \
  --title=Example \
  --admin_user=admin \
  --admin_password=password \
  --admin_email=info@example.com
# Change permissions in the wp-content folder because otherwise we
# cannot install plugins or load media because we don't have permission
docker-compose exec -T wp /bin/bash -c "\
  chown -R www-data:www-data /var/www/html/wp-content/plugins/ && \
  chown -R www-data:www-data /var/www/html/wp-content/ && \
  chmod 775 /var/www/html/wp-content/plugins && \
  chmod 775 /var/www/html/wp-content/"

#  Install some default plugins.
#  This might not be necessary actually, because we can install the plugins later on, right before running the tests.

#  We need the `--user 33:33` because of https://hub.docker.com/_/wordpress/ 
#  See: "Running as arbitrary user"
#  `wpcli` is the reference to the docker service (we have `wp`, `wpcli`, and `mysql`)
docker-compose exec -T \
--user 33:33 \
wpcli wp plugin install $1 --activate

:blue_book:Side note: In this proposal the above is a bash script, but it will probably be implemented as a js script to have better control.

Configuring the WI

Once we have started the WI, we need to configure it for each test case. It should be possible to run any arbitrary commands to configure the WI from the test code using something like (using https://github.com/sindresorhus/execa):

const execa = require("execa");

// install a plugin
execa.command("docker-compose run --user 33:33 wpcli wp plugin install yoast");

execa.command(`docker-compose exec -T wpcli wp post create \
./post-content.txt \
--post_category=holiday \
--post_title='Post from file'`)

Cypress tests

For any cypress tests that require a particular plugin or database configuration, we can use the “task” API from cypress to run server-side code: https://docs.cypress.io/api/commands/task.html

Jest tests

When running jest tests, we can just run the above commands for installing plugins or creating posts, etc. directly in the jest unit test files.

The jest tests can be defined in the same way as any other jest tests, just that now they can make use of the WI (at http://localhost:3001 by default)

:warning:Note that each github actions step runs in a separate process, so the WI setup must be run in the same step as npm run test:ci

Clearing tests

:warning:Because we are running just one instance of WP (the WI) it’s basically a big chunk of global state. Care needs to be taken in order to reset the DB and clean any changes in between the tests. For example, we can use the WP CLI to deactivate all plugins with something like: wp plugin deactivate hello, etc.
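As a sketch of that cleanup, a helper could compose the docker-compose/WP-CLI commands to run between tests. The `wpcli` service name matches the docker-compose services mentioned earlier, but the exact reset steps (e.g. using `wp site empty`) are assumptions:

```javascript
// Sketch: compose the commands needed to reset the WI between tests.
// The "wpcli" service name and the reset steps are assumptions.
const resetCommands = (plugins = []) => [
  // deactivate everything a previous test may have activated
  "docker-compose exec -T wpcli wp plugin deactivate --all",
  // remove all posts, comments and terms created by previous tests
  "docker-compose exec -T wpcli wp site empty --yes",
  // re-activate only the plugins this test needs
  ...plugins.map((p) => `docker-compose exec -T wpcli wp plugin activate ${p}`),
];

// Usage (e.g. in a Jest beforeEach, run the commands serially with execa):
// const execa = require("execa");
// for (const cmd of resetCommands(["head-tags"])) await execa.command(cmd);
```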

Testing frontity WP plugins

As mentioned earlier in the proposal, we need to test the wp-plugins separately from the tests that will run in the main frontity repo.

Those tests can re-use the same workflow as we are using in the frontity repo.

The one extra necessary step is to symlink the plugins in the repo to the location on the docker volume with something like:

services:
  wp:
    image: wordpress:5.0
    ports:
      - 8080:80
    volumes:
      - ./plugins/:/var/www/html/wp-content/plugins
## rest of the file omitted

The main question is whether we want to run the full e2e tests in the wp-plugins repo or only test REST API responses (jest snapshots)

Open questions:

1. Shall we keep the current e2e test suite and workflow intact and create a NEW suite of e2e tests that are going to be used exclusively with the WI ?

The alternative would be to modify the way that we run the e2e tests right now:

  • Refactor the e2e tests to point to http://localhost:3001
  • Update the github workflow to run the e2e tests and start the WI in the same step

2. Should we create a new docker environment with docker-compose for each new test? Or can we re-use the services and only reset the database / plugins for each new test case ?

Running a completely new docker-compose up for each test case separately:

  • pros
    • completely separating the environments. no need to worry about cleaning the DB / deactivating the plugins
  • cons
    • on my machine it takes about 30 secs to run the start-e2e-docker script (and that’s not taking into account the time it takes to pull the docker image from the registry). So running this for every single test case might be too slow.
    • might be a bit harder to configure

3. In the wp-plugins repository, should we run the full e2e test suite with cypress using a frontity app or just test REST API responses?

Some plugins need both a WP plugin and a frontity package (e.g. head-tags). Perhaps we might want an e2e cypress test in such cases. The obvious downside is that it’s more complex to set up.

4. I had an idea that perhaps we could do jest snapshot tests of the mysql dumps? Not yet sure if this is going to be useful, but just mentioning it in case we do.

Reference

There is a PR with some testing related to the current Proposal: https://github.com/frontity/frontity/pull/450

Great work @michalczaplinsky :clap:

My feedback:

  1. I like the idea of the common scripts folder but I’d move them to the e2e folder for clarity. Actually, I’d move everything related to e2e tests to that folder.

  2. I’d create separate “suites” of e2e tests in separate folders, each with its own database/uploads and the bootstrapping sequence in a JavaScript file.

Something like this:

/e2e/
  /scripts # Common scripts used by the bootstrap.js files.
    start-e2e-docker.js
    ..

  /some-test-suite
    /data
      db.sql # The WordPress database for this suite.
      /uploads # The uploads folder for this suite.

    bootstrap.js # The bootstrapping sequence for this suite.

    /project # The Frontity project for this suite.
      frontity.settings.js
      ..

    /tests # The Jest tests for this suite (if any).

    /integration # The Cypress tests for this suite (if any).
    ..

Then, I’d setup npm scripts to run the tests:

> cd e2e/some-test-suite
> npm run test
# This starts a JavaScript file with the bootstrapping of this suite:
# - Run docker-compose
#   - Mount volume for e2e/some-test-suite/uploads
#   - Install DB from e2e/some-test-suite/data/db.sql
# - Install required WP plugins using WP-CLI
# - Wait for port 8080
# - Start Frontity project in production mode, with some flags
# - Wait for port 3000
# - Run Jest, or Cypress or both, whatever is in the bootstrapping file.

I’d also add other commands that run the same bootstrapping but end with a different testing tool, to ease local development of the suites:

> cd e2e/some-test-suite
> npm run cypress:open
# - Run same bootstrapping but the Frontity project starts in development
# - Run Cypress in "Open" mode.

> cd e2e/some-test-suite
> npm run jest:watch
# - Run same bootstrapping but the Frontity project starts in development
# - Run Jest in "watch" mode.

I’d also add a script to dump the database after changes:

# Login in the WordPress Instance (http://localhost:8080/wp-admin) and do the
# required changes, like changing configuration or creating posts.
> cd e2e/some-test-suite
> npm run dump
# - Saves the new database in the data/db.sql file.

The only thing that is not clear to me is how to do the bootstrapping of tests that may need more than one configuration without duplicating the folder.

For example, imagine we need to test a suite with both:

  • The latest version of WordPress.
  • The 5.0 version of WordPress.
  • Frontity project without a publicPath.
  • Frontity project with a publicPath param.

I guess we’d need some kind of “matrix” capability in the bootstrapping files, where they get the matrix and run the bootstrapping for each combination.
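Such a matrix could be a plain object that gets expanded into all parameter combinations before bootstrapping. A minimal sketch (the helper name and parameter keys are hypothetical):

```javascript
// Expand a matrix object like { wordpressVersion: [...], publicPath: [...] }
// into the list of all parameter combinations (cartesian product).
const expandMatrix = (matrix) =>
  Object.entries(matrix).reduce(
    (combos, [key, values]) =>
      combos.flatMap((combo) =>
        values.map((value) => ({ ...combo, [key]: value }))
      ),
    [{}]
  );

// Each combination would then get its own bootstrapping run:
const combos = expandMatrix({
  wordpressVersion: ["latest", "5.0"],
  publicPath: ["default", "/custom/path"],
});
// combos has 4 entries, one per WordPress/publicPath combination
```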

  1. I’d move the current e2e tests to a “suite”. They can be part of the same suite because they don’t need different bootstrapping sequences. The fewer systems we need to maintain/teach the better.

  2. I’d use a different WordPress Instance for each suite of tests. I think it’s going to be much simpler to maintain than having to clear everything after each test.

If it takes 30 secs, it’s not that much. For GitHub actions, we can easily cache the docker images with https://github.com/actions/cache. Each repository now has 5Gb so it’s more than enough.

I guess we’d need to clean things from time to time. For example, something that appears in the uploads folder after running the tests, but it’s not meant to be there when the tests start. But it shouldn’t be too much.

  1. To simplify the system, I’d try to reuse the same bootstrapping scripts in the GitHub actions. That way we don’t need to maintain/teach two different things.

  2. I’d run all the suites in parallel. If we run them in serial, the time it takes to run all the suites will increase with each suite.

If we use the folder-per-suite approach we could use a GitHub action workflow similar to this:

jobs:
  run-e2e:
    strategy:
      matrix:
        suite_folder:
          ["some-test-suite", "some-other-suite", "yet-another-suite"]

    steps:
      - name: Checkout
        uses: actions/checkout@v1

      - name: Setup npm cache
        uses: actions/cache@v1

      - name: Install dependencies
        run: npm ci

      - name: Run e2e tests
        run: cd e2e/${{ matrix.suite_folder }} && npm run test

  1. To simplify the system, I’d try to reuse the exact same system in the wp-plugins repository (except for the different docker-compose volumes for the Frontity plugins, of course). Also, Cypress can be useful to test the plugin UIs.

Finally, I’d love to reuse the exact same system for the CLI e2e tests. Maybe we can do so :slightly_smiling_face: I think @michalczaplinsky’s JavaScript bootstrapping file idea makes this system powerful enough to accommodate those types of tests as well. But we can look at that after we have finished this part.

Thanks for the feedback @luisherranz !

:+1: for 1. and 2. In the PR the files and folders are a bit of a mess right now, but that was my idea as well. Same for adding extra scripts to ease local development.

Let’s start by noticing that we have 3 different levels of configuration for our e2e tests:

  1. the build phase - we can build the frontity app with different --publicPath or --target values
  2. the docker image phase - we might need different versions of WordPress or MySQL and those only come in different docker images. So we have to be able to tell docker-compose to pull different images from the registry.
  3. the service phase - this is all the remaining configuration that we can do inside of the docker service using the WP CLI: install a plugin, add media, create a post or even load some data into the DB.

So, with that in mind I think that, according to your suggestion, we would need a suite for each combination of the parameters in 1. and 2. I think that some of the parameters from 3. can be specified in the test case itself, but some should be specified separately for each test suite. I guess the plugins should be configured for each test suite separately so we don’t have to deal with activating / deactivating them after each test.

This would then imply the structure like:

/e2e
  /moduleTargetSuite
    bootstrap.js
    /data
    /project
    /tests
    /integration 
  /es5TargetWPSuite
  /specificWPVersionSpecificPluginVersionSuite
  /specificWPVersionSpecificPluginVersionAndAnotherPluginSuite

etc, etc.

This way we can parameterize the test runs like you mentioned:

 strategy:
      matrix:
        suite_folder:
          ["es5TargetSuite", "specificWPVersionSuite", "moduleTargetSuite"]
   ....

However, the only issue I see with this is that we might end up with a multitude of “suites”. Let’s say that we have to test a specific plugin against 4 versions of WordPress - we’d have to create 4 test suites for this.

However, if we are going to spin up multiple WI anyway, maybe we could take a slightly different approach:

  1. Have only one e2e directory
  2. Run the “bootstrap” script in the beforeEach() or beforeAll() of the test script.
  3. The bootstrap script can then include the full configuration including building the frontity project, running docker compose up, etc. So it’s gonna be something like what Luis mentioned before:
// bootstrap.js
// This starts the bootstrapping of this suite:
// - Build the frontity app
// - Start the Frontity app
// - Run docker-compose
//   - Mount volume for e2e/some-test-suite/uploads
//   - Install DB from e2e/some-test-suite/data/db.sql
// - Install required WP plugins using WP-CLI
// - Wait for port 8080
// - Wait for port 3000
// - Run Jest, or Cypress or both, whatever is in the bootstrapping file.
  4. In the workflow file, we can parameterize the test runs with the matrix so that we start a separate container for each testfile like:
 strategy:
      matrix:
        testfile:
          ["some-testfile", "another-testfile", "yet-another-testfile"]
  
      - name: Run e2e tests
        run: cd e2e/ && npm run test ${{matrix.testfile}}

the npm run test command in the last step is going to pass the filename as a filter to jest, like jest test/another-testfile --coverage --ci, so that it will start a test run only for the testfiles that match the name.

The only problem that I see with this is that we’re only going to have one frontity project for all the tests, so if we have a test case in the future that requires parameterizing that, it’s going to be more difficult.

But the benefit of doing it this way would be that:

  • There aren’t multiple largely duplicated folders with different “suites”
  • If we’re in a situation where we want to run some tests for a specific plugin with a specific configuration, we don’t need to create a whole new suite to do that. We can just add a new testfile with the updated bootstrap and update the workflow matrix so that GitHub will run it in parallel.

You’re right that one folder with everything is not going to be the most optimized approach because we’re going to end up with a lot of separate suites and we’ll hit the GH actions limit for concurrent containers at some point. Also, it’s not optimal when you need to run the same test suite for different WordPress/Frontity configurations.

I’ve made a video to try to summarize this:

Drawing: https://excalidraw.com/#json=5640718332723200,D6HyhNiq1u6YMTFJotHr0A

Is it clear or would you add something else?

Also, @michalczaplinksi, now that the concepts are more clear why don’t you work on a glossary so we can better communicate and then outline a first draft of the system we should try to implement first?

YES, This is exactly what I was talking about :smile: I think you explained it a bit better with the drawing.

The very core is finding the best way to combine all the different parameters so that we can optimize both:

Run the smallest amount of WI possible on github actions

but also

Have the flexibility to test any combination of parameters (down to the level of the individual test)

Also, @michalczaplinksi, now that the concepts are more clear why don’t you work on a glossary so we can better communicate and then outline a first draft of the system we should try to implement first?

yes, I’m on it.

Also, I think I have an idea of how we could solve the above problem that I hope is not too over-engineered. In essence we could have a system that does some “pre-processing” of the test files, in a similar way to how Facebook’s Relay compiler pre-processes the graphql queries in individual .js files (e.g. gatsby takes advantage of that).

Maybe we could create a similar system, where we can still specify the exact bootstrapping needed at the level of the individual test case. Then, the pre-processor could collect that information from each test case before running any tests and:

  • build the minimal set of frontity apps for each of the params for build
  • start the minimal set of WI in parallel based on the bootstrap with some params

This way we get the flexibility: it would be easy to write a new test case, and we don’t have to create new suites when we add one.

I’ll mull it over during the weekend and I’ll also explain in more detail :slightly_smiling_face:

1. Glossary:

  • parameter: Any kind of variable that is configurable for an e2e test. E.g:

    • Fixed or latest version of WordPress
    • Fixed or latest version of any WP plugin (+ any configuration of that plugin)
    • Any configuration (database)
    • Any custom WP configuration that can be done through the WP CLI
    • Any media uploaded
    • Custom post contents (with embeds or any kind of weird content)
  • phase: Each parameter belongs to one of 3 phases: build & serve, docker-image, or data/plugins. They are explained in detail below.

  • workflow: The github action workflow that is used to run the e2e tests. It will be defined in a workflow file called something like docker-e2e-wp-test.yml

  • container: A container refers to a docker container. In the context of the e2e tests, a new container will be used for each unique combination of parameters.

  • WI: A WordPress instance that is used for testing. Each container has its own WordPress instance. This WP instance can have any kind of configuration as defined by the parameters.

  • test case: It’s the single test that is going to be written using jest or cypress syntax and run with either one or the other tool respectively. It is defined using the test() or describe() function.

2. There are 3 types of phases

We call them phases because each type is executed at a different time.

  1. the build and serve phase - we can build the frontity app with different --publicPath or --target values
  2. the docker image phase - we might need different versions of WordPress or MySQL and those only come in different docker images. So we have to be able to tell docker-compose to pull different images from the registry.
  3. the data/plugins phase - this is all the remaining configuration that we can do inside of the docker container using the WP CLI: install a plugin, add media, create a post or even load some data into the DB.

3. The key problem that we are facing is:

  • Run the smallest amount of containers possible (only one and not more for each unique combination of parameters)
  • At the same time be able to specify unique parameters for each test
  • Somehow, be able to specify how many containers and with what parameters to launch “ahead of time”. Github actions do not allow us to create containers dynamically - they have to be hardcoded into the workflow file in the matrix parameter!

So, this means that each test case can use different parameters (for each phase). But we don’t want to spin a new container for every test case, we want to do that only if some parameter is different.

Example:

  • We have 5 test cases that use the same build & serve and docker image phases.
    • The first 2 need the head-tags plugin to be active
    • The last 3 need the head-tags plugin to be inactive

We don’t want to spin 5 containers because we only need 2 of them.

But how do we communicate to the workflow those requirements that are defined for an individual test case so that we can launch a minimal set of containers?

Github actions require us to hardcode the “matrix” of possible container types in the workflow file… I’m going to explain further, hold on :slightly_smiling_face:

Proposal

For the “build and serve” phase:

First, I need to note that in order to make the workflow work we need to add a flag for the build command to use a different output directory than ./build. So, when running npx frontity build, the files are built into another directory. This should be a trivial change, in fact looking at the code for the build command it has already been planned.

Analogously we also need to change the serve command to have a flag to look for the build files in another directory. Likewise, a simple change.

Because we need different builds for different test cases, I think we have 2 options here:

option 1 - build a separate frontity app for each test case

This is a bit wasteful, but because all tests can run in parallel (more on that later) the time complexity is still basically O(1) for this. Then the app can just be built and served without any extra steps. For any node unit tests this should be simple by using the programmatic API of build and serve; however, in the cypress tests I think that we’ll have to use tasks because we’re not allowed to run server-side code in the test case directly.

import { build, serve } from "@frontity/core";

let hash; // this is gonna be the build folder's name

beforeEach(async () => {
  hash = getUniqueHash(); // sth like `940utgh8v923q4r`
  // option names are illustrative - the final build/serve API is still to be defined
  await build({ mode: "production", target: "module", publicPath: "/some/publicPath/", outDir: hash });
  await serve({ outDir: hash });
});

option 2 - build all the different possible versions of the app ahead of time

For each possible combination of build & serve parameters, we can build the app and put each of the builds in a unique build folder and then run each one on a separate port, like:

| target  | publicPath | port |
|---------|------------|------|
| default | default    | 3000 |
| es5     | default    | 3001 |
| default | /some/path | 3002 |
etc.

I think this mapping of parameters ==> port number can be fixed and we can rely on a convention inside the tests to know which app we connect to. That is to say that e.g. port 3001 will always and only run the frontity app with es5 target and default publicPath (according to the table above).
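That convention could live in one small shared module so the build script and the tests agree on the same table. A sketch, using the combinations from the table above (the helper name is hypothetical):

```javascript
// Fixed convention: one port per build parameter combination.
// The entries mirror the parameters ==> port table above.
const BUILDS = [
  { target: "default", publicPath: "default", port: 3000 },
  { target: "es5", publicPath: "default", port: 3001 },
  { target: "default", publicPath: "/some/path", port: 3002 },
];

const portFor = (target, publicPath) => {
  const entry = BUILDS.find(
    (b) => b.target === target && b.publicPath === publicPath
  );
  if (!entry) throw new Error(`No build for ${target} / ${publicPath}`);
  return entry.port;
};

// In a test: cy.visit(`http://localhost:${portFor("es5", "default")}/`);
```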

This way, let’s say that we have a cypress test like:

it("should show the thing on click", () => {
  cy.visit("http://localhost:3001/?thing-on-click");
  cy.get("#thing").click();
  cy.get("#other-thing").should("exist");
});

The fact that we are accessing localhost on port 3001 tells us that we are accessing the frontity app with the es5 target and the default publicPath.

For the docker image and data/plugins phase:

I propose that we divide the e2e workflow into two separate jobs:

  1. pre-processing job
  2. test job

The pre-processing job will be responsible for literally “pre-processing” the test files in order to figure out the minimal set of containers to launch. I’m still a little bit fuzzy on all the details but I believe this can be done. More details in the next section :slightly_smiling_face:

The “actual” test job will run the e2e tests, basically do all the things that we expect from a test, etc.

Remember when I mentioned this:

Github actions require us to hardcode the “matrix” of possible container types in the workflow file… I’m going to explain further, hold on :slightly_smiling_face:

However, github actions allow passing information from one workflow job to another, including outputting information that can be used as a matrix for another job. We can use the “output” feature of github actions to collect the information in the pre-processing job to create a matrix of containers for the test job (example):

### e2e-docker-wp-test.yml

name: e2e-docker-wp-test
jobs:
  pre-process:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set-matrix.outputs.matrix }}
    steps:
    - id: set-matrix
      # sth like that - it's not the exact syntax
      # The `pre-process` npm script should print the final matrix as JSON to standard output
      run: echo "::set-output name=matrix::$(npm run --silent pre-process)"
  test:
    needs: pre-process
    runs-on: ubuntu-latest
    strategy:
      # this is the matrix of all the containers necessary for the test job
      matrix: ${{ fromJson(needs.pre-process.outputs.matrix) }}
    steps:
    - run: npm run test
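To illustrate the idea, here is a sketch of what the `pre-process` script could print: a matrix object with one container per unique parameter hash, in the shape that `fromJson` expects (the `hash` field and container names are hypothetical):

```javascript
// Sketch of the pre-process script's output: a matrix object in the shape
// that `strategy.matrix` / fromJson() expects, with one container per
// unique hash of a test case's parameters.
const buildMatrix = (testCases) => {
  const containers = [...new Set(testCases.map((t) => t.hash))];
  return { container: containers };
};

const matrix = buildMatrix([
  { file: "a.spec.js", hash: "wp5.0-headtags" },
  { file: "b.spec.js", hash: "wp5.0-headtags" }, // same params, same container
  { file: "c.spec.js", hash: "wp-latest" },
]);
process.stdout.write(JSON.stringify(matrix));
// prints {"container":["wp5.0-headtags","wp-latest"]}
```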

Connecting the WI to the frontity apps

Right now you might observe that we have built and served the frontity applications but do not yet have the WI. This poses a problem:

How do we connect the frontity apps to the WI? The frontity apps are built and served before we know what the parameters of the WI are. Specifically, we do not know which WI each frontity app should connect to for a specific test case!

I think I’ll need some input on how to make this work. My initial idea was that the server could expose some kind of API for changing the WordPress REST API, but I’m guessing that this is terrible from a security perspective. Perhaps the same could be accomplished with an environment variable on a “per request” basis. I’m not entirely sure.

How is the actual pre-processing job going to work?

I’m not 100% sure on all the details but let’s review the types of parameters that we will have to “pre-process” and in what format they come:

1. Fixed or latest version of WordPress

For this, we simply have to specify a version as a string like: "5.1" or "4.8".

2. Any configuration (database)

The database configuration can be stored in a uniquely named folder. This folder would store the SQL file with the database config. So, what we would need here is a path to <folder>/data.sql

3. Fixed or latest version of any WP plugin (+ any configuration of that plugin)

Same as 2. This will be defined in the data.sql database dump.

4. Any custom WP configuration that can be done through the WP CLI

This configuration is a string that is basically a bash command (or a list of commands).

5. Any media uploaded

Same as 2. This is a path to a folder that contains all the media.

6. Custom post contents (with embeds or any kind of weird content)

Same as 2.

The main thing to observe is that each of the 1-6 parameters is “hashable”. This means that we can compute a hash for the combination of all parameters for each test case and launch one container for each unique hash.

I’m not sure what is the best way to “run” the pre-processing but my best idea was to use a babel plugin.

This way, we could put the configuration inside of the test case as a “magic comment” inside of that test case and hash the contents of that magic comment.

Actually running the tests

Assuming that we have now launched the frontity apps and the WIs, we have to run all and only the tests that should be run for that particular frontity app and WI.

I think that this can be accomplished with the same mechanism that I mentioned earlier that creates the "matrix" of containers. The npm run pre-process script can return the names of the test cases, which could then be passed as parameters to jest or cypress inside the workflow file.

Using built-in github services instead of custom

We should make use of the built-in github service containers instead of running docker-compose ourselves. This should let us avoid some of the overhead of launching the containers ourselves.
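For reference, a service container in a GitHub Actions job looks roughly like this (the image, tag, and ports below are placeholders, not our actual configuration):

```yaml
jobs:
  run-e2e:
    runs-on: ubuntu-latest
    services:
      wordpress:
        image: wordpress:5.1  # placeholder image/tag
        ports:
          - 8080:80
```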

I like the idea of running all the possible versions of the app using different ports. If that is the case, why don’t we do the opposite: run the WIs first? After all, they are the thing we want to optimize for.

I would try to avoid that for the first version. We can manually link each test-case to a WI.


Two additional comments:

  • Could you please add test-case to the glossary?
  • The database also contains all the post content and plugin configuration, so there’s no need for extra steps for those, the data.sql is enough.

For the sake of documenting the progress, this is the document that we came up with, which outlines the PoC architecture for the tests:

This is the last meeting we had where Michal explained the first implementation of the new system

This is the summary of the current status:

We have decided that the best way to connect the WP instances with the frontity apps and run the individual test cases is to hardcode this relationship in the workflow file. This strategy is dead simple and can in the future be automated at build time / at commit with husky.

The rough outline of the files involved:

github workflow file:

name: docker-compose-e2e

on: [pull_request]

jobs:
  run-e2e:
    strategy:
      matrix:
        # the instance name followed by a colon and comma-separated names of test suites
        instances: [ "instance-1:wp-test.spec.js", 
                     "instance-2:other-test.spec.js,extra-test.spec.js,another-test.spec.js" ]

    # ... do more initialization steps
    
      - name: Run e2e tests
        run: npm run e2e-wp-test ${{ matrix.instances }}

test script (in pseudocode, in reality this will be a js file):

// will read standard input, which is a string like:
//   "instance-1:test-spec-1.js,test-spec-3.js"

var { $instance, $testSuites } = parse(standard input)

cd $instance
node bootstrap.js  // start WP and return when ready

// build the frontity project
cd ../project
npx frontity build
npx frontity serve --port 3001

cd ..

// prepend the file names with "./integration"
const fileNames = createTestFileNames($testSuites);

cypress run --spec fileNames

Before each test case we’ll have to clean the DB and probably install some WP plugins or run some custom JS on the server. For this we can use the cy.task API from cypress:

describe("WP test", () => {
  beforeEach(() => {
    // cypress commands are enqueued, not awaited
    cy.task('replaceDB');
    cy.visit("http://localhost:3001?name=e2e-wp-test");
  });

  it("should load", () => {
    cy.task('installPlugin', { name: 'some-plugin' });
  });
});

the tasks are defined in cypress plugins file:

// e2e-wp/plugins/index.js
const execa = require('execa');

module.exports = (on, config) => {
  on('task', {
    // a task must return a value or a promise
    replaceDB: () => {
      return execa.command('docker-compose run --user 33:33 wpcli wp etc...');
    },
    installPlugin: ({ name }) => {
      return execa.command(
        `docker-compose run --user 33:33 wpcli wp plugin install ${name}`
      );
    }
  })
}

The directory structure is roughly:

├── cypress.json
├── e2e-wp.js    # This is the main "bootstrap" file
                 # that orchestrates docker & frontity build/serve
├── instance-1    # There will be one folder per instance, we can re-use instances
│   ├── data
│   │   ├── db.sql
│   │   └── uploads
│   ├── docker-compose.yml
│   └── start-e2e-docker.js  # This file will be called once per container.
                             # It contains all the WP instance-specific configuration
├── integration    # All the test files can be placed here
│   └── wp-test.spec.js   
├── package.json
├── packages
│   └── wp-test
│       ├── package.json
│       ├── src
│       │   └── index.tsx
│       └── types.ts
├── plugins
│   └── index.js
└── project   # A frontity multi-site project
    ├── favicon.ico
    ├── frontity.settings.ts  
    └── package.json

Notes:

  • Currently you can only run one instance of WP at a time when running locally. I think we should have a separate script that launches all the instances locally with https://www.npmjs.com/package/concurrently
  • We need to make sure that the WP instance has the correct permalink structure, because the default docker WP image does not use the “pretty” permalinks, which prevents access to the REST API on the /wp-json/ route.
  • We still need to add a script to dump the database contents. This will be useful when manually making changes to an instance and later re-using its configuration.
  • I think we should add an extra step in the CI for the wp-plugins repo that zips each of the frontity plugins and puts them in a /dist folder in the repo. The reason is that if we want to install the latest version of some frontity plugin, we either have to download the wp-plugins/plugins/plugin-name folder somehow and put it in wp-content (GitHub doesn’t support that), or install it with the WP CLI like wp plugin install <url of zipfile>

What about getting rid of the e2e-wp.js file and moving that work into the JS file of each instance (now named start-e2e-docker)? That way we wouldn’t need to add the specs to the workflow; we could add them in that file.

If we build abstractions for the things that e2e-wp.js and start-e2e-docker are doing and put them in the /scripts folder, the code should be really simple but flexible at the same time. For example:

// e2e/instance-1/start.js
import { startWP, installPlugin } from "../scripts/wp";
import { startFrontity } from "../scripts/frontity";
import { startCypress } from "../scripts/cypress";

(async () => {
  await startWP();
  await installPlugin("yoast", "latest");
  await startFrontity();
  await startCypress({
    tests: ["wp-test", "other-test"],
  });
})();

The workflow could stay just like this:

jobs:
  run-e2e:
    strategy:
      matrix:
        instances: ["instance-1", "instance-2"]

For the tests that don’t require a WP installation, the file can be:

// e2e/instance-no-wp/start.js
import { startFrontity } from "../scripts/frontity";
import { startCypress } from "../scripts/cypress";

(async () => {
  await startFrontity();
  await startCypress({
    tests: ["tests-that-dont-need-wp-1", "tests-that-dont-need-wp-2"],
  });
})();

If in the future there are some e2e tests better handled by Jest, we can run those too:

// e2e/instance-1/start.js
import { startWP, installPlugin } from "../scripts/wp";
import { startFrontity } from "../scripts/frontity";
import { startCypress } from "../scripts/cypress";
import { startJest } from "../scripts/jest";

(async () => {
  await startWP();
  await installPlugin("yoast", "latest");
  await startFrontity();
  await startCypress({
    tests: ["wp-test", "other-test"],
  });
  await startJest({
    tests: ["jest-tests"],
  });
})();

Some other ideas:

  • Maybe startWP() and startFrontity() can not only start docker and frontity but also wait until they are ready using waitOn().
  • Maybe startFrontity can run npx frontity dev if NODE_ENV is development and npx frontity build && npx frontity serve if it’s production.
  • Maybe startCypress can run cypress.open if NODE_ENV is development and cypress.run if it’s production.

That is a setting in the DB, so you just have to log in to WP, change it, and dump the database.

yup, I think that should work :slight_smile: I’m trying that approach now.

I like those :slightly_smiling_face: :+1:

In the spirit of automation, and since it seems you will already be using WP-CLI, you can use these commands to change the permalinks without having to log in to WP.

wp option get permalink_structure
wp option update permalink_structure '/%postname%'

Thanks @filipe! This is what we’re going to do, indeed.

BTW, we’re working on the E2E testing system in this PR: https://github.com/frontity/frontity/pull/450 (still work in progress) if you’d like to follow the progress :slight_smile:

I’ve made a video summary of the current status of the PoC:

Apologies for the sound quality - I didn’t notice that my headset did not work :blush:

Thanks for the video Michal. I have some questions:

  • Why did you move the WordPress setup from the start/bootstrap routine to the spec files? Wasn’t the whole purpose to do that once and then run all the specs of that instance without having to set it up again?

  • Do you have a list of the scripts/tasks that we need? For example:

    • Install plugin.
    • Remove plugin.
    • Replace database.
    • Dump/save database.
  • When do you think we’ll be able to start using it? :slight_smile:

Regarding the current e2e folder, I would merge it with the new approach within an instance folder that simply doesn’t initialize WordPress.

Maybe if we are going to install/remove plugins and replace the database on each spec file, we are moving away from the need to run several WordPress instances in parallel… One instance would be enough.

I guess it’s going to be slower, but it’ll be simpler.

What do you think @mmczaplinski?

We discussed these questions a bit in today’s daily. This is the video: