Frontity Coupled Architecture + PHP/V8 Server Rendering

This is a rambling, experimental post, and I want to warn you that I have very little to no experience with Frontity. These ideas may have been explored already – so apologies for the duplication if so.

As it happens, I was recently working on a prototype that essentially re-implements the WordPress theming system in React – a highly coupled / embedded approach to writing WordPress React sites. I’ve pushed some non-working (well, works-for-me-only) code to https://github.com/joehoyle/r/ and /joehoyle/r-example-theme if anyone is overly curious. In doing so, Ryan McCue recommended I check out Frontity to see if efforts could be combined.

My main goals were essentially to have a WordPress theming framework that lets you create Header.tsx just as you would header.php. R supports SSR-only, client-only or hydrated rendering. It basically maps WordPress rewrites to react-router automatically and implements the WP template hierarchy. A (maybe) novel part of the approach is running the V8 engine embedded in PHP via the v8js PHP extension, which means there’s no need for a Node server for SSR, and you can cross-call between JS and PHP. The main advantage of that is being able to route REST API requests locally during SSR so they never touch the network (via rest_do_request()).
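To give a sense of the rewrite mapping: WordPress already stores its rewrites as regex → query pairs, so they can be exported and turned into a route table on the JS side. This is just an illustration of the idea, not the actual R code:

```php
<?php
// Illustration only (not the actual R code): WordPress keeps its rewrites as
// 'regex' => 'index.php?var=$matches[1]&…' pairs, which can be exported and
// mirrored as react-router routes on the JS side.
global $wp_rewrite;

$rules = $wp_rewrite->wp_rewrite_rules();

// Hand this to the JS bundle (e.g. over the V8Js bridge sketched further down)
// and build the route table from the regex keys.
$routes_json = wp_json_encode( $rules );
```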

So, a long story to say that I essentially tried to go down the same route with Frontity: implement a Frontity server in PHP/V8 that can provide SSR from your existing WP server alone, providing tighter coupling / integration (shock!) between the WordPress site and Frontity. Probably a major aside: I personally don’t find decoupled architectures highly attractive; running two services (WP + Node), caches, deploy workflows etc. isn’t that appealing to me. I still want to build WordPress sites, but in React and with JS as first-class, not “progressively enhanced”.

How did it go? Well, I spent a few hours building a new server that uses V8 in PHP. I guess it’s not really a server per se, more like a simple script that renders the page for a given context/URL and returns it back to PHP (with all the fetch shimming to internally service REST requests).
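In case it helps to picture it, the PHP side is roughly the sketch below. The bundle path and the global render() entry point are just illustrative names, and real code needs proper promise/microtask handling on top of this:

```php
<?php
// Rough sketch: execute an SSR bundle in V8 from PHP via the v8js extension.
// "build/render.js" and the global render() entry point are illustrative names.
$v8 = new V8Js();

// Values assigned to the V8Js instance are exposed to JS on the global PHP object.
$v8->url = home_url( add_query_arg( [] ) ); // current request URL (escape properly in real code)

$bundle = file_get_contents( __DIR__ . '/build/render.js' );

try {
	// Assume the bundle defines a global render( url ) that returns an HTML string.
	$html = $v8->executeString( $bundle . ';render( PHP.url );', 'ssr.js' );
	echo $html;
} catch ( V8JsScriptException $e ) {
	// Fall back to client-side rendering if SSR fails.
	error_log( $e->getMessage() );
}
```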

I got to the point where I was able to build a bundle (using esbuild – more on that in a bit) that executes in V8 – no small feat, as anyone who has had to work in a non-Node, non-browser JS context will know. Essentially this means building for the browser context, but polyfilling missing APIs like URL, fetch etc.
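The fetch shim is where the PHP ↔ JS bridge pays off: instead of touching the network, it can call straight back into PHP. A rough, synchronous sketch of the idea (restFetch is a made-up name, and real promise/microtask handling in v8js needs more care than this):

```php
<?php
// Rough sketch of a fetch shim: JS calls PHP.restFetch(), PHP answers via rest_do_request().
$v8 = new V8Js();

$v8->restFetch = function ( $path ) {
	$parts   = wp_parse_url( $path );
	$request = new WP_REST_Request( 'GET', $parts['path'] );
	if ( ! empty( $parts['query'] ) ) {
		wp_parse_str( $parts['query'], $query );
		$request->set_query_params( $query );
	}
	$response = rest_do_request( $request );
	return wp_json_encode( rest_get_server()->response_to_data( $response, false ) );
};

// Minimal fetch() polyfill loaded before the app bundle runs in the same context.
$shim = <<<'JS'
var fetch = function ( url ) {
	// Strip the origin and the /wp-json prefix so PHP gets a route like /wp/v2/posts.
	var route = url.replace( /^https?:\/\/[^\/]+/, '' ).replace( /^\/wp-json/, '' );
	var body  = PHP.restFetch( route );
	return Promise.resolve( {
		ok:   true,
		json: function () { return Promise.resolve( JSON.parse( body ) ); },
		text: function () { return Promise.resolve( body ); },
	} );
};
JS;

$v8->executeString( $shim, 'fetch-shim.js' );
```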

It was all going swimmingly until I hit a snag with the components that use loadable. Again, I don’t know much/anything about loadable (there’s a theme here), but from what I gather it’s hard-tied to Babel and there’s some trickery needed to load loadable components synchronously. I wasn’t able to get any further yet – I have SSR working for the header / menu parts of the mars theme. The pre-population of data over the V8/PHP bridge is working too. I’m currently using renderToStaticMarkup, as hydration requires a lot more complexity in grabbing the loadable chunks etc.

Here’s where I got to (no client rendering, server rendered in php-v8js)

At the moment I’m using esbuild to bundle Frontity and all the deps (side note: has anyone used https://esbuild.github.io/? It is AMAZING! Unbelievably fast, with a simple but powerful plugin API). However, there does appear to be a blocker in that loadable requires Babel, so it looks like I might need to switch over to that and downgrade from 120ms total build times to more like 2 minutes with webpack + Babel!

I do wonder if maybe I am doing something wrong with loadable, as when using renderToStaticMarkup I think it’s meant to resolve the modules synchronously (which are in my bundle), but alas, I haven’t worked that out yet.

So, that’s where I got to. I also just discovered this community forum, so I thought it was probably time to ask if this has all been tried before, or if it’s at all an interesting direction to pursue. I did stumble on frontity/frontity-embedded-proof-of-concept, which looks like it has similar goals of more tightly integrating Frontity with the WP site. That’s kind of what I’m looking at too – but ultimately a lot tighter, with the SSR done in PHP as well.

I didn’t push up any of this code yet – it basically ended up being a fork of R, but instead of rolling my own React layer, it attempts to switch in Frontity to do what it does best. The code is in even worse shape than R (and that’s saying something!)

Happy holiday hacking!

3 Likes

Wow, @joehoyle, I am glad and honored to have you here! Welcome to our forum and many thanks for sharing your work with us. I can’t believe you made Frontity somehow work with PHP/V8! Really interesting :grinning_face_with_smiling_eyes:

We did explore the PHP/V8 approach years ago but we decided not to use it for the Frontity framework. I will try to elaborate on the reasons why we decided that.

I know Human Made doesn’t have the same requirements/constraints as Frontity, so some of these points may not apply. I hope at least that they give you an idea of our point of view.

1. Using PHP/V8 requires WordPress server access

Relying on the fact that PHP/V8 is available in the WordPress server is not ideal because it means that:

  • People using managed WordPress hosting services couldn’t use Frontity because they don’t have server access to install PHP/V8.
  • Even for those who have server access, it would mean that setting up Frontity would require some “sysadmin” skills.

By relying on an external NodeJS server instead, people don’t need to change anything on their WordPress server and can use whatever NodeJS hosting they prefer for Frontity, like Vercel, Heroku, or even serverless options like AWS Lambda or Google Cloud Functions.

But relying on an external NodeJS server doesn’t mean that users will need a NodeJS-only host like those I mentioned. WordPress hosting providers can still offer solutions that host both WordPress + NodeJS, and we are currently working with Pantheon, WP Engine, and VIP in that regard.

And people can also run NodeJS locally alongside their WordPress if they want (section 2).

My point here is that PHP/V8 is a much more restrictive constraint than a NodeJS server.

2. Local access to NodeJS/WordPress is still possible

If people want to run Frontity on their own WordPress server, they can install NodeJS instead of PHP/V8 and run it locally.

If they don’t want to use the network, WordPress and Frontity can call each other using localhost.

How much faster PHP/V8 is compared to a request to localhost is something I don’t know, but it is certainly not the bottleneck of the system. So even if a request to localhost is somewhat slower, I doubt it will be relevant in the final picture.

Actually, going over the network and through a cache for the REST API requests can have benefits, because a decoupled system like Frontity can benefit from a “two-layer cache system” (section 3), and it keeps the code compatible between server and client and between the different modes (section 5).

My point here is that even without PHP/V8 you can still run NodeJS and WordPress on the same server with a similar setup.

3. Accessing the REST API over the network can be a good pattern

When working with decoupled architectures, it can be interesting to divide the cache into two different layers: markup (HTML) and data (REST API) and invalidate them independently.

This is a video I did to explain this “two-layer cache” pattern to the VIP team to start exploring it with them: https://www.youtube.com/watch?v=A-aDdX0mTL0. The idea is to cache the REST API aggressively and invalidate only the HTML files after a deployment.
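One way to picture that split from the WordPress side is to send long-lived Cache-Control headers for the REST API responses while the HTML layer is purged on deploy. In practice most of this lives in the CDN/proxy configuration, so the snippet below is only an illustrative sketch:

```php
<?php
// Illustrative sketch of the "two-layer cache" idea: cache REST API responses
// aggressively at the edge, and purge only the HTML layer after a deployment.
// The real setup would normally be done in the CDN/proxy config rather than PHP.
add_filter( 'rest_post_dispatch', function ( $response ) {
	if ( $response instanceof WP_REST_Response && ! $response->is_error() ) {
		// Let the CDN keep REST data for a day; invalidate HTML separately on deploy.
		$response->header( 'Cache-Control', 'public, s-maxage=86400' );
	}
	return $response;
} );
```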

A big publisher (10M+ pageviews, 300k+ indexed pages) that we collaborate closely with is already using this two-layer cache pattern with Frontity, and the results are impressive, so this is something we want to keep working on and encourage.

My point here is that accessing the REST API locally during the SSR is not that important, or can even be less useful in some scenarios.

4. The Decoupled and ReverseProxy modes

We totally agree with you that an ideal “WordPress theming system in React” needs to be tightly integrated and as undisruptive to the WordPress workflow as possible. And you are right, that is the whole reason for the Embedded mode, where Frontity is just used to replace the PHP theme.

The PHP/V8 approach is similar to using Frontity in Embedded mode, which is great. But the Embedded mode is only one of the three modes we support in Frontity:

  • Decoupled mode: The normal two-domain, two-stack, headless approach.
  • Embedded mode: Single domain. Replaces the PHP theme execution with an HTTP request to the Frontity server (sketched below).
  • ReverseProxy mode: Single domain. A reverse proxy like NGINX manages the routing between WordPress and NodeJS.
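
Conceptually, the WordPress side of the Embedded mode is just a small piece of PHP that swaps the theme’s output for the Frontity server’s response. This is a simplified sketch of the idea, not the actual plugin code, and the server URL is a placeholder:

```php
<?php
// Simplified sketch of the Embedded mode idea (not the actual plugin code).
// FRONTITY_SERVER is a placeholder for wherever the NodeJS server runs.
define( 'FRONTITY_SERVER', 'http://localhost:3000' );

add_action( 'template_redirect', function () {
	// Ask the Frontity server for the HTML of the current URL
	// instead of running the PHP theme.
	$response = wp_remote_get( FRONTITY_SERVER . $_SERVER['REQUEST_URI'], [ 'timeout' => 10 ] );

	if ( is_wp_error( $response ) ) {
		return; // Fall back to the regular PHP theme.
	}

	echo wp_remote_retrieve_body( $response );
	exit;
} );
```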

The problem with PHP/V8 or the Embedded mode is that you cannot take advantage of the fact that you no longer need to run the whole WordPress stack to generate the HTML.

I don’t mean that the Embedded mode is bad. For most Frontity users, we think Embedded mode should be the way to go. But for big publishers or big hosting providers where keeping server load low is critical, we have started recommending the ReverseProxy mode.

This is a more detailed explanation of the three modes and their differences: https://www.notion.so/The-Frontity-Modes-72a61c1aef7a45a6931b0db54612e489

And this is a demo of the ReverseProxy configuration: https://www.youtube.com/watch?v=YpNJb4Lq44E

My point here is that using an external NodeJS server is more flexible than using PHP/V8, which only allows the Embedded mode.

5. Code should be as universal as possible

There is another point in favor of accessing the data using an HTTP request to the REST API: even though PHP/V8 or the Embedded mode opens up the possibility of injecting the WordPress content directly (for example by using rest_do_request, as you mention), that implementation would be exclusive to “SSR in Embedded mode”. It won’t work for:

  • Client-side navigations.
  • SSR in Decoupled or ReverseProxy modes.

For that reason, we try to make everything accessible through the REST API and encourage people to have a good REST API caching strategy instead.

Also, using an HTTP request to get the HTML in Embedded mode means that we can reuse the very same HTTP request to get the HTML in both Decoupled and ReverseProxy modes. Frontity doesn’t know in which mode it is being used, and we think that is a good thing.

My point here is that using code that is universal to any server configuration and to client-side rendering makes it simpler and more flexible at the same time.

6. PHP/V8 is not widely adopted

The sad truth is that PHP/V8, even though it is an awesome project, is not widely adopted/used yet, and therefore the risk of ending up with a discontinued technology is higher than it is with a simple NodeJS server.

I hope that changes in the near future and they start seeing a lot of adoption; I wish them all the best. I really do, because making open source software is not easy. But that risk is something that needs to be taken into account from a business perspective.

In Spanish we describe this situation as “a snake that eats its own tail”: the fact that it is not widely adopted means that projects won’t bet on it, which in turn leads to even less adoption.

All projects suffer from this situation at the beginning, of course; it is not exclusive to PHP/V8, and it affects Frontity as well. For that reason, we have raised funding and we now have a team of very talented people working on this full-time. Also, each step we take in our partnerships with WordPress hosting providers and WordPress agencies, and each new site running Frontity, makes it less likely to fail, so we will continue working to increase adoption.

This is a bit like the situation with esbuild (or Parcel, Snowpack…) vs webpack. At this point, even though these new bundlers can do some things better than webpack (like startup time), they are not the standard solution, and that means it is easier to run into things that are not compatible with them yet.

My point here is that for a framework like Frontity, it makes sense to stay close to the industry standards and reduce this type of risk as much as possible.


So, as you can see, it was not a single point, but a sum of small points that made it not worth it for a framework like Frontity.

I would love to know which of these points make sense to you and which don’t, to better understand the Human Made point of view.

Oh, and please send my regards to Ryan :wave::slightly_smiling_face:

5 Likes

Sorry for the delay in response @luisherranz, Xmas and all that!

Thanks for sharing your motivations and thoughts around the NodeJS vs V8 approach. I think you’ve made the right decisions for the Frontity project, and they probably suit most users. The ReverseProxy mode also looks quite interesting.

On 5 (universal code): I totally agree – rest_do_request is the REST API, it just means requests are routed to the REST API internally, so client-side and decoupled SSR can always use exactly the same stuff; it’s just a more direct way to make API requests. As it happens, there’s significant overhead in a REST API request in WordPress, so I still think this is one advantage of the V8 route: a page that might make 15 API requests is a lot of load on the PHP system, but 15 calls to rest_do_request is perhaps half of the work. Granted, with another caching layer in between, this isn’t the case for every page load.
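To make that concrete, the difference is roughly “15 full WordPress bootstraps” versus “one bootstrap plus 15 internal dispatches”. Something like the following (the routes are just examples):

```php
<?php
// All of these dispatch inside the one PHP process that is already bootstrapped,
// instead of 15 separate HTTP requests that each load WordPress from scratch.
$routes = [ '/wp/v2/posts', '/wp/v2/categories', '/wp/v2/tags' ]; // example routes
$data   = [];

foreach ( $routes as $route ) {
	$response       = rest_do_request( new WP_REST_Request( 'GET', $route ) );
	$data[ $route ] = rest_get_server()->response_to_data( $response, false );
}
```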

That is one issue I have with the decoupled approach though: for pure “viewer” experiences on static-style websites it’s OK to have a cache. For things like logged-in user experiences where you don’t want a cache layer, where the WP application “behind” the front-end Node system is making changes, you will end up with more live backend API requests and thus suffer the overhead of all those API requests. However, this is a trade-off depending on use case, of course.

I think you hit the nail on the head with 6 (v8js adoption). It’s not Frontity’s battle to get adoption for v8js. Given that the extension is virtually unsupported, that really doesn’t help my case :stuck_out_tongue: I dream of a new, supported extension that embeds V8 with support for things like the V8 debugger protocol, ESM and the like. Again, I think you have absolutely made the correct decision by not pursuing this for Frontity. Personally I am in “prototype the future” mode, where user adoption, and really any real-world practicalities, don’t apply so much!

The same applies to esbuild and the like; I hope that at some point in the future projects like this push the industry forward. While acknowledging how much progress has been made over the past years in things like front-end bundling, decoupled architectures, and many other parts of the JS ecosystem, it’s also all pretty terrible. And I mean that in a “things can be so much better” kind of way! Projects like esbuild, and I hope my experiments with V8/PHP linking and others, challenge the norms and try to prove potential paths forward. I’m equally interested in the adoption of Deno; first-class support for TypeScript, for example, moves what people expect from the Node workflow. It’s quite possibly not practical to adopt these things just yet, but on an infinite timescale things will get better!

1 Like

Happy Christmas and New Year :smile:

Yeah, good point.

That’s true. Although there is actually an interesting angle to that, and I wonder what your opinion on it is.

Once you move to a technology that is capable of doing client-side navigation, you are only interested in SSR to improve the performance of the first load (both for SEO and for user experience), because the subsequent navigations are handled on the client side.

Logged-in pages don’t benefit from SEO, so the only SSR benefit that remains is the user-perceived performance of the first load.

Also, logged-in applications don’t usually need to handle much deep linking. The first page users see is usually either the login screen or a home screen. After that point, the rest is client-side navigation. It’s true that there can be some deep linking between coworkers/friends who share links to internal resources, but not much more.

I think that is the reason very few logged-in applications use SSR anymore. Most of them are simply client-side SPAs.

There are also applications that have both public content (that is indexed and deep-linked, so it benefits from SSR) and logged-in content. A good example is e-commerce: products are public and cart/checkout pages are kind of “logged-in”.

In that regard, we did a preliminary proof-of-concept of a possible @frontity/woocommerce package that would abstract the API connection (similar to what @frontity/wp-source does for the regular REST API). This is the result:

Part of the experiment was to learn how much of the site could be aggressively cached using this approach. We were able to cache:

  • All the HTML files.
  • All the REST API requests, except the /wp-json/store/cart and /wp-json/store/checkout endpoints.

For the cart and the checkout pages, which are our “logged-in” pages, we served a cached HTML shell that renders a loader, so the user sees something on the screen fast. Thanks to the loader, they know that the site is working and the cart is coming. This is somewhat similar to the “app shell” model.

With this approach, once the site is cached you can serve static files for all the HTML/REST API requests, and the WordPress server load is reduced to handling the cart and checkout through the REST API.

I wonder what your opinion is about this new trend of not doing SSR for the logged-in parts of a site, and what drawbacks you see :slightly_smiling_face:

Haha, I love that. Absolutely true. And that is also usually more fun too :grinning_face_with_smiling_eyes:


By the way, after I answered you, I had another interesting conversation about whether it would be possible to do SSR of Server Components with Gutenberg Full Site Editing, with no NodeJS/V8 server required.

It’s a crazy idea and we are still years away from something like that happening, but it made me think about the relationship between the WordPress community and the rest of the web development community, especially the JS/React ecosystem.

I realized that, even though WordPress is so big that we can always do things the WordPress way, there is still a huge benefit in connecting WordPress more closely with the rest of the web development ecosystem. I’ve always felt like WordPress was too separate from the rest of the community, and this decoupled architecture is a great opportunity to join forces again.

In that sense, what we are trying to do at Frontity is to do things the Node/npm way, so people using Frontity can leverage the full potential of the JS/React ecosystem without restrictions, and the power of WordPress at the same time. If we continue doing things the WordPress way, we will be creating gaps and constraints between us and the JS/React community again.

And I think it would be beneficial, not only because WordPress devs using JS/React tools like Frontity can take advantage of the potential of the JS/React ecosystem, but also because, at some point, we should start attracting JS/React web developers to WordPress. And that will happen only if these new frontend tools belong to the JS/React ecosystem.

So this could make for a kind of extra point:
7. Do things the Node/npm way to leverage the full potential of the JS/React ecosystem and attract JS/React developers.

Well, that’s kind of a horrible title but I guess the point is clear :laughing:

1 Like

Touché! https://github.com/privatenumber/esbuild-loader :exploding_head::grinning_face_with_smiling_eyes::grinning_face_with_smiling_eyes: