Proof of concept on Pantheon + Deploy on Google Cloud Platform

I’ve taken a look at your PR. Great work!

@stevepersch are the TERMINUS_ENV and TERMINUS_SITE variables present at runtime or only during the deploy?

If they are present at runtime, you can try using Node env variables to simplify the deploy and avoid the sed. Something like this:

const env = process.env.TERMINUS_ENV;
const site = process.env.TERMINUS_SITE;
const url = `https://${env}-${site}.pantheonsite.io`;
const api = `${url}/wp-json`;

const settings = {
  state: {
    frontity: {
      url,
      // ...
    }
  },
  packages: [
    {
      name: "@frontity/wp-source",
      state: {
        source: {
          api,
        }
      }
    },
  ]
}

And you can easily add a default url when env and site are undefined, maybe pointing to your local WordPress installation.
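A minimal sketch of that fallback, wrapped in a small helper so both branches are easy to see (the localhost URL and port are just assumptions for a local WordPress install):

```javascript
// Build the Pantheon URL from the env variables, falling back to a
// local WordPress install when they are undefined.
const buildUrl = (env, site) =>
  env && site
    ? `https://${env}-${site}.pantheonsite.io`
    : "http://localhost:8080"; // hypothetical local WordPress URL

const url = buildUrl(process.env.TERMINUS_ENV, process.env.TERMINUS_SITE);
const api = `${url}/wp-json`;
```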

If those are only present at deploy time and can’t be added to the runtime, let me know and we’ll think about the best way to solve it :slight_smile:

Thanks for the tip @luisherranz! That env change worked just fine.

I was able to set the env variables in the serverless runtime with

gcloud functions deploy ${TERMINUS_SITE}--${TERMINUS_ENV} --project=serverlessplayground --runtime=nodejs10 --trigger-http  --allow-unauthenticated  --entry-point=default --source=frontity/build --set-env-vars="TERMINUS_SITE=${TERMINUS_SITE},TERMINUS_ENV=${TERMINUS_ENV}"

I’m looking at ways to serve the compiled JS from the static directory. I know we discussed it on our call earlier but I might be forgetting some details.

I see at least four options (ordered by least appealing to most appealing to me):

  1. Shoehorn these files onto Pantheon’s regular platform. Pantheon’s own docs site, https://pantheon.io/docs, is a Gatsby site deployed in parallel with the Drupal 7 site that powers the rest of pantheon.io. The build process here could put these static files inside WordPress.
  2. Use Cloud Run instead of Cloud Functions. I’d really rather not build Docker containers for each deployment.
  3. Deploy the content of the static directory to Cloud Storage and add complexity to the proxying logic so that requests to /static/* are served from Cloud Storage. Our proxying logic will almost certainly need to accommodate multiple sources eventually, but I’d rather not do that yet. You mentioned publicPath in a comment above; that could help, though it might not be necessary with the proxying option.
  4. Alter Frontity so that it can serve requests to /static when deployed in a serverless fashion. I’m not sure how hard this would be. There’s a potential downside of overloading the serverless function for requests to static assets. But with a CDN in front of everything that should not matter.

What do you think?

@stevepersch I’ve created a small POC to see if it’s possible to embed JS files and images/fonts in a single server.js file to serve them with a serverless function.

It seems like it’s possible :slight_smile:

I think that would be the easiest option, but people need to be really careful with the size of the images: for example, it would be fine to embed a small image for the logo, but not much more than that.
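To make the size concern concrete: embedding an asset means inlining it in the bundle as a base64 string (this is what webpack’s asset inlining does below a size limit), and base64 grows data by a third:

```javascript
// Base64 encodes every 3 bytes as 4 characters, so an embedded asset
// takes roughly 4/3 of its original size inside server.js.
const raw = Buffer.alloc(30 * 1024); // pretend this is a 30 KB image
const base64 = raw.toString("base64"); // what gets inlined in the bundle
console.log(base64.length); // 40960 characters for 30720 bytes
```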

Tomorrow I’ll open a new feature discussion to start a proper conversation about this :+1:

From the rest of the options, I think the best one is 3. We can take a look at configuring the publicPath as well.

I have opened the Feature Discussion for the embeds: Embed static files on the server

I also took a look at our server code and it should be fairly easy to add a publicPath option (npx frontity build --publicPath="https://some.path") but we had the idea of exposing the whole Webpack configuration for both projects and packages through a frontity.config.js file.

Changing the publicPath would look like this:

// frontity.config.js
export const webpack = ({ config }) => {
  config.output.publicPath = "https://some.path";
}

Of course you can use env variables in that file too:

export const webpack = ({ config }) => {
  if (process.env.PUBLIC_PATH)
    config.output.publicPath = process.env.PUBLIC_PATH;
}

I have opened a Feature Discussion for that too: Customize Webpack configuration

Finally, I thought about a 5th possibility:

  5. Allow different SSR and static URLs in the PHP Theme Bridge.

The URLs could be:

  • SSR: https://us-central1-your-account.cloudfunctions.net/frontity
  • Static: https://storage.cloud.google.com/your-frontity-bucket*

*I’m not sure if that is a valid Google Storage URL, but you get the idea.

The PHP Theme Bridge would be in charge of using one URL or the other when requests are for the /static/ folder (Static URL) or for anything else (SSR URL).

This is different from the 3rd option in that WordPress would be the one serving those assets, with the same final URL and the same CDN configuration.

@stevepersch let me know what you think about all these options :slight_smile:

By the way, Frontity adds a unique hash to all the static file names (something like /static/app.3f9a1c.js), so it’s safe to use an immutable max-age cache-control for the whole /static folder.

Hi @luisherranz, thanks for sharing this progress! Option 5 would definitely work, though I’m not sure I’d rank it above option 3 because I’m looking to get the benefit of Pantheon’s CDN. Putting everything behind one domain, as far as the CDN and the public are concerned, is preferable to me compared to exposing the fact that the static directory is sourced from some other place.

And the more I think about it, the more I think that I do just need PHP/Theme Bridge to be able to handle multiple backends, so option 3 is fine.

I’m curious, in your experience, how large can the static directory become for real sites? How large are the largest assets, and how big can the overall directory get? Any large image would be part of the design/theme, not part of the content of the site, yes?

Let me understand this better:

  1. Is the CDN caching requests of the main domain? (HTML requests)
  2. Or is the CDN only used for static assets in a different domain?

Apart from that, does your CDN honor the cache-control headers?

These are the cache-control headers we recommend for Frontity: https://github.com/frontity/now-builder/blob/master/src/index.ts#L116-L128

    [
      // HTML.
      {
        src: `/.*`,
        headers: { "cache-control": "s-maxage=1,stale-while-revalidate" },
        dest: `/server.js`
      },
      // Static assets.
      {
        src: `/static/(.*)`,
        headers: { "cache-control": "public,max-age=31536000,immutable" },
        dest: `/static/$1`
      }
    ]

We recommend stale-while-revalidate for the HTML of small sites because that way the cached response is always served; it matters less for big sites because 99.9% of the visits will get a cached response anyway.

s-maxage is 1 because it is a safe default, but of course this should be configured by the site owners depending on their needs.

For the static folder, we just set it as immutable because all the files include a unique hash.

I don’t quite understand this. Point 5 does precisely that: it puts everything under the same domain. Remember that when the PHP Theme Bridge is in charge, it does an internal HTTP request and returns the response, so everything that goes through it will use the WordPress domain.

Maybe we need some diagrams with all the scenarios to make sure we’re talking about the same cases :slight_smile:

A Frontity server should always remain quite small.

The current server size is 1 MB, including React, Koa, the Frontity core and the rest of the required libraries. I guess the most complex app possible shouldn’t get bigger than 3 MB.

The client assets are a bit smaller, around 600 KB. If we embed them in the server, it’d be 1.6 MB. I guess a very complex app with client assets embedded could reach 5 MB, which is still a tolerable size for serverless.

Adding fonts or an image for the logo is fine. But for images included in the WP content, people should use their WordPress site, not Frontity.

But “they shouldn’t” doesn’t mean they won’t… If people start doing it, we may need to teach them that they should use WP for that.

Is option 3 using the publicPath option or doing the routing with your own proxying logic?

I don’t quite understand this. Point 5 does precisely that: it puts everything under the same domain.

@luisherranz Ah, yes, I definitely misread what you had in mind for point 5. Sorry about that!

Maybe we need some diagrams with all the scenarios to make sure we’re talking about the same cases :slight_smile:

Yes :smile: this is what I had in mind for point 3, and I think it might also be what you were imagining for point 5.

Pantheon’s CDN does respect cache-control headers and it caches both static assets and HTML responses with headers designating them as cache-able.

Thanks for sharing the recommended cache-control headers from the now-builder example. That looks like Now-specific configuration. Is there a way to get the Frontity server to emit those headers?

A Frontity server should always remain quite small.

Cool, those numbers align with what I expected.

Hey, I’m glad we are on the same page. Thanks for the diagram, that was exactly what I meant :slight_smile:

Frontity doesn’t add any headers right now, but it could add sensible defaults.

Server extensibility, which is currently on the roadmap, will allow any Frontity package to extend the underlying Koa* server.

For example, this will add the recommended cache-control headers:

export default {
  // Your normal package stuff: state, actions, React...
};

// Extend Koa server.
export const server = ({ app }) => {
  app.use(async (ctx, next) => {
    if (ctx.path.startsWith("/static/")) {
      ctx.set("cache-control", "max-age=31536000,immutable");
    } else {
      ctx.set("cache-control", "s-maxage=1,stale-while-revalidate");
    }
    await next();
  });
};

We also have in mind to create an official package for this. I’m not sure whether it should be @frontity/cache-control or a more general @frontity/headers.

People will be able to configure this @frontity/cache-control package in their Frontity settings.

  • With the default settings:
const settings = {
  packages: [
    "@frontity/cache-control",
    // ...
  ]
};
  • With custom settings:
const settings = {
  packages: [
    {
      name: "@frontity/cache-control",
      state: {
        headers: {
          ssr: "s-maxage=300"
        }
      }
    }   
  ]
};

*We chose Koa over Express because it’s half the size, easier to use, and doesn’t rely on dynamic imports (which are bad for bundling everything into a single server.js file).

Awesome :smile:

@stevepersch I have a question for you.

What is the benefit of using a proxy over just replacing the PHP theme?

I ask because if you are going to cache the final HTML anyway, the performance gain is going to be minimal, and with that type of proxy some of the things that usually work in WordPress are going to break: direct access to PHP files used by some plugins, plugins that add custom files/routes like sitemaps, robots.txt or ads.txt, plugins for 301 redirections, plugins that modify headers…

My idea was to use template_include to overwrite the current theme and wp_remote_get for the request so I’d love to hear your opinion on this matter :slight_smile:

Hi @luisherranz!

Are you asking about doing the proxying in a CDN instead of in PHP? Yeah, I’ve been working on the assumption that developers will be skeptical of proxying in PHP because they know it’s faster and more scalable to do so in a CDN. But you’re right that cache-hits make the difference negligible in most situations.

For me, it’s less about specific technical benefits as it is about the mental model. In the ideal diagram, would Frontity be inside of WordPress or next to WordPress?

I think I’ve been mentally drawing the ideal diagram with Frontity next to WordPress. But maybe in the ideal diagram, Frontity would be inside WordPress, replacing the theme. The fact that getting the HTML from Frontity requires calling out to a separate Node.js environment could be abstracted away similar to the way an object cache API abstracts away the detail of whether your cache objects are coming from PHP memory, a database, or Redis.

Putting Frontity inside WordPress would be less disruptive to existing WordPress sites. It could allow plugins to alter responses and do the things you’re pointing out like sitemaps, etc. (although even when Frontity is outside WordPress there would need to be configurable logic for which paths go to which runtime, PHP or Node).

Putting Frontity next to WordPress might be a clearer mental model for newcomers. I think it would allow front-end developers to have a better understanding of what is happening where compared to a model where PHP has the potential to alter the output.

Another very tangible version of this question is “one repo or two?” As you’ve seen, the sample projects I’ve done with Frontity combine the repos. Frontity becomes a directory inside the WordPress codebase and deployments to PHP and Node.js happen in the same deployment pipeline. But would teams prefer two repos with two deployment pipelines?

Either model can be optimized for performance. Which model is better for optimizing understanding?

Yes, those are precisely our thoughts on the matter and the reason I asked you :slight_smile:

  • If Frontity is next to WordPress then we are talking about Headless WordPress.
  • But if Frontity is inside WordPress then we are talking about an alternative React rendering engine for WordPress.

Both approaches are very interesting but the second one is more natural to WordPress. So we think it will make more sense for hostings like Pantheon, which are already taking care of the CDN/caching of WordPress output gracefully. But please give it a thought and let me know.

We plan to support both architectures by the way.

I don’t think the mental model of the PHP Theme Bridge will be hard to understand. It may be even easier for those not familiar with the Headless CMS architecture.

I guess that’s up to each team to decide. But in my opinion, having both codebases in the same repo makes sense because, as you said in an earlier message, sometimes a new feature will contain changes in both WP and Frontity that must be tested and deployed simultaneously.


Hey @stevepersch, I hope you are doing great in these uncertain times :slight_smile:

I just wanted to share with you a proof of concept of the Theme Bridge:

The implementation is so simple that it doesn’t need much explanation. Just take a look at the code :slight_smile:

The only problem I’ve stumbled upon so far is that the normal Nginx configuration for WordPress doesn’t send static file requests (like JS, fonts, images…) to WordPress; it just returns a 404 if it doesn’t find those assets in the file system.

To solve that, people would have to either:

  1. Change their Nginx configuration.
  2. Use a different publicPath setting to request the static assets directly from their Node server (or static storage).
  3. Remove extensions from Webpack. For example, file.js becomes file--js.

But we don’t like the second idea because it means they have to configure an additional cache/CDN for that URL, and we don’t like the third idea because, well… it feels really hacky and incompatible with any other system that expects files to have proper extensions.
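For reference, the second option would amount to pointing publicPath at the Node server in the frontity.config.js sketch discussed earlier (the URL below is purely hypothetical):

```javascript
// frontity.config.js — sketch of option 2: request static assets
// directly from the Node server instead of through WordPress/Nginx.
export const webpack = ({ config }) => {
  config.output.publicPath = "https://node.example.com/static/";
};
```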

I’d love to know how you solved the Nginx config problem in Pantheon. I guess you already had that problem before with the HackyProxy, didn’t you?

@SantosGuillamot is going to open a Feature Discussion here in our forum to start talking about the possible features and configurations.


Apart from that, I forked the simple-cache plugin to add the Content-Type header of static assets.

I must say that I am amazed by the performance. It’s faster than I thought. Once it’s cached, my local server answers in about 5ms for both the HTML and static assets from Frontity. It’s not something you need for Pantheon, but I’d like to mention it as well :slight_smile:

Hey @luisherranz,

Sorry for the delay here, I’m holding up!

On Pantheon, we hit that nginx limitation you describe on woff2 files but luckily not js, css, or images. I think that’s because Pantheon was first built for Drupal which sometimes creates those files on-demand.

To accommodate the “Frontity inside of WordPress” case, we’ll need to alter nginx or sync those assets to an unversion-controlled directory on the PHP container.

Would you be open to a pair programming session later this week where we could try to set up theme-bridge-poc on the Pantheon+GCP Cloud Function architecture?

Absolutely :+1:

I’ll send you an email.

To summarize our meeting: The Theme Bridge PoC plugin is working great in Pantheon :tada:

The instant cache invalidation of Pantheon is working great with this approach, both for the HTML and the REST API requests.

Next steps:

  • Frontity
    • Add build-time configuration of publicPath.
  • Pantheon:
    • Find a way to upload the /static folder to the WordPress file system after running npx frontity build in the CI.

Once we have those things, we can test a real site in their infrastructure.

After that, we can work on other not so critical issues:

  • Frontity
    • Release the final Theme Bridge plugin with support for using ENV variables to change the settings.
    • Add dynamic configuration of publicPath.
  • Pantheon
    • Create an upstream of WordPress+Frontity.

I have added a Feature Discussion for the publicPath and I guess we will include it in the next sprint: Change publicPath.

Can you @SantosGuillamot confirm that?

We haven’t finished planning the next sprint yet, but this is one of our top priorities, so it’ll probably be included, yes.


Finally got a chance to play with this POC and I’m frightened by how well it worked. Like… it just works - I was not expecting that. This is on 2 separate Digital Ocean instances (one nginx and one node jobby).

I just popped it on my WP instance and it pulled the JS from the separate external node machine. Awesome.


(medusa.403page.com is the hostname for my backend for 403page.com - for this test you can see it’s serving both)

Testing on WP Engine next.


Confirmed working nicely on WP Engine (nginx).

It’s a valid point about the /static/ folder: I got 404s out of the box, as expected. But I added this nginx rule easily enough to make it work:

# Serve /static from node server through main domain
location ~* ^/static/?(.*) {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Cache-Control $http_cache_control;
    proxy_pass http://MY_NODE.JS_SERVER_IP:3000;
}

It’d be very cool if that could be handled inside PHP - but I’m not sure how/if that would work yet. Gonna play around a bit and see…

P.S. @stevepersch is that nginx rule easy enough to implement on Pantheon on a per user basis? Might be a good first step for now at least.