Deploy frontity.org on Google Cloud Run

I wanted to test whether it would be possible to deploy our website, frontity.org (https://github.com/frontity/frontity.org), on Google Cloud Run. I still want to do more tests, but I was able to deploy an initial version, so I thought it could be interesting to other users as well.

And this is the process I followed:

1. Create a project in Google Cloud

The first step is to create a project to deploy our app. Go to https://console.cloud.google.com and create it there. Choose a name, and a project ID will be assigned. In our case, the project ID is frontity-org.

2. Enable the proper APIs

In the project you have just created, go to the APIs section and enable the ones we will be using: the Cloud Run API and the Cloud Build API.
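If you prefer the terminal to the console UI, both APIs can also be enabled from the CLI. A sketch, assuming the gcloud SDK is already installed and authenticated (the identifiers below are the official service names for these two APIs):

```shell
# Enable the Cloud Run and Cloud Build APIs for the current project
gcloud services enable run.googleapis.com cloudbuild.googleapis.com
```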

3. Get your app ready

You need a Frontity project to deploy. You can create one using our quickstart guide. We are using the one we already have for our website: https://github.com/frontity/frontity.org.

Apart from your app code, you have to add two files at the root of your project: a Dockerfile and a .gcloudignore.

Dockerfile

The Dockerfile I used for this first test is:

FROM node:12-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install --only=production
RUN npx frontity build
EXPOSE 8080
CMD ["npx", "frontity", "serve", "--port", "8080"]

.gcloudignore

This substitutes the usual .dockerignore when building with Cloud Build. This is the one used for this first test:

README.md
node_modules
npm-debug.log
.git

4. Build your container

At this point we start using Google Cloud and Docker, so you should have Docker and the Google Cloud SDK (https://cloud.google.com/sdk/docs/) installed.

In order to build it, you have to:

Log in to gcloud

The first step is to authenticate so you can work on your project. To do so, run:

gcloud auth login

This will open a new browser tab where you can authenticate.

Set the project we want to work on

You have to select the project you want to work on with the following command:

gcloud config set project PROJECT_ID

In our case:

gcloud config set project frontity-org

Build the container

Use the following command, which will read the Dockerfile:

gcloud builds submit --tag gcr.io/PROJECT_ID/CONTAINER

In our case:

gcloud builds submit --tag gcr.io/frontity-org/frontity-org

5. Deploy to Google Cloud Run

Once the build has finished, and it has succeeded, you can deploy it using this command:

gcloud run deploy --image gcr.io/PROJECT_ID/CONTAINER --platform managed

In our case:

gcloud run deploy --image gcr.io/frontity-org/frontity-org --platform managed

It’ll ask you for the service name, the region, and whether you want to allow unauthenticated invocations. You can select any service name and region, but when the CLI asks if you want to allow unauthenticated invocations, you have to answer “yes”.

And that’s it! You may need to wait a few minutes until the service works as expected, but these steps should be enough.
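To check the deployed service, you can also get its URL straight from the CLI. A sketch (SERVICE and REGION are placeholders for the values you chose during the deploy):

```shell
# Print the public URL of the deployed Cloud Run service
gcloud run services describe SERVICE \
  --platform managed --region REGION \
  --format "value(status.url)"
```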


The next test I’d like to run is to use Google Cloud CDN to serve the static files. According to its docs it seems possible. I’m planning to follow this guide to do it and I’ll post the results here.


Awesome :slight_smile:

It doesn’t seem to be working fine. It takes ages to start, as if it were doing some work it shouldn’t be doing. Maybe we’re doing something wrong?

From what I see in the Quick Start Guide everything seems fine… Maybe change the start command to "npm run serve -- --port 8080" instead of "npx frontity serve", just in case what’s taking so long is the npx command, which for some reason might be installing frontity each time instead of using the copy in node_modules?

CMD ["npm", "run", "serve", "--", "--port", "8080"]

Doesn’t it work with the .gitignore file?

Yes, it doesn’t seem right, but I wasn’t able to find what I was doing wrong. I’ve been taking a look at the logs and, when it tries to start the server, sometimes it fails and returns this error:

Memory limit of 256M exceeded with 295M used.

We could extend the limit, but the time to start would be the same.
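In case we want to try it anyway, the memory limit can also be changed from the CLI instead of the UI. A sketch (SERVICE and REGION are placeholders for your own values):

```shell
# Raise the memory limit of an already deployed Cloud Run service
gcloud run services update SERVICE \
  --memory 512Mi --platform managed --region REGION
```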

I’ve tried this, but it doesn’t seem to work. However, taking a look at the logs, there is one line still saying this before starting the server:

frontity serve "--port" "8080"

Because of this, I’m wondering if there could be a cache (or something similar) that still holds something incorrect from the first tests. After running the build command and pushing the image, I get this in the console:

Pushing gcr.io/frontity-org/frontity-org
The push refers to repository [gcr.io/frontity-org/frontity-org]
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Waiting
xxxxxxxxxxxxx: Waiting
xxxxxxxxxxxxx: Waiting
xxxxxxxxxxxxx: Layer already exists
xxxxxxxxxxxxx: Layer already exists
xxxxxxxxxxxxx: Layer already exists
xxxxxxxxxxxxx: Layer already exists
xxxxxxxxxxxxx: Pushed
xxxxxxxxxxxxx: Pushed
xxxxxxxxxxxxx: Pushed
xxxxxxxxxxxxx: Pushed

The xxx are hashes. It tries to push 8 of these, which I guess correspond to the 8 different steps we have in the Dockerfile. Maybe the ones where it says “Layer already exists” aren’t updated, and this is generating issues?
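For context, each instruction in the Dockerfile produces one image layer, and “Layer already exists” normally just means an unchanged layer (for example, one inherited from the node:12-alpine base image) is being reused in the registry, not that stale app code is being kept. If caching were ever a concern, a common pattern is to order the Dockerfile so only the dependency layers get reused. A sketch of that pattern (not the exact file used in this thread):

```dockerfile
FROM node:12-alpine
WORKDIR /usr/src/app
# Copy only the manifests first: the install layer stays cached
# until package.json changes, even when the app code changes.
COPY package*.json ./
RUN npm install --only=production
# Copying the source afterwards only invalidates the layers below.
COPY . .
RUN npx frontity build
CMD [ "sh", "-c", "npx frontity serve --port $PORT" ]
```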

Not by default, but I think we could use the --ignore-file flag to point there.

I’ve deployed a new version with 512M from the GC Run UI. Let’s see if that solves the problem.

EDIT: No, it didn’t.

We could try running the command with the shell. That way we also have access to the env variables:

FROM node:12-alpine
WORKDIR /usr/src/app
COPY . .
RUN npm install --only=production
RUN npx frontity build
CMD [ "sh", "-c", "npx frontity serve --port $PORT" ]

@luisherranz @SantosGuillamot, I’ve followed the tutorial step by step and it works. But indeed it takes ages to build and a long time to start when a user/visitor visits it for the first time.

I’ve noticed in my log that it starts the server when visited for the first time in a new session (I am aware of cold starts):

SERVER STARTED – Listening @ http://localhost:8080

But when I refresh the page everything seems to be fine and it responds right away (no signs in the logs of the server starting).

Hey @frankoonk, thanks for testing this out! :smile:

The problem is that the “cold start” should take less than a second and right now it’s taking about 30 seconds. We need to keep investigating this.

Have you tried running the CMD as a shell script?

CMD [ "sh", "-c", "npx frontity serve --port $PORT" ]

We’ve been running more tests and it seems it’s a problem with the commander library we’re using for the CLI. If we skip it and run the serve script directly, the cold start takes ~1 sec. To do this, we created a server.js file with the following content and used CMD ["node", "server.js"] in the Dockerfile instead.

server.js:

const serve = require("@frontity/core/dist/src/scripts/serve").default;

serve({
  isHttps: false,
  port: 8080,
});
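One detail worth noting with this file: the port is hard-coded, while Cloud Run actually injects the port to listen on through the PORT environment variable. Reading it is plain Node, independent of any Frontity API; the resolvePort helper below is a hypothetical name used only for illustration:

```javascript
// Cloud Run provides the listening port in the PORT env variable;
// fall back to 8080 for local runs.
function resolvePort(env) {
  return parseInt(env.PORT || "8080", 10);
}

console.log(resolvePort(process.env));
```

The resolved value could then be passed as the port option of the serve call instead of the literal 8080.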

Comparing the different possibilities we have tried, these are the cold start times:

  • Using the common frontity command, it takes 20-30 seconds:
CMD [ "sh", "-c", "npx frontity serve --port 8080" ]
  • Not using npx, just in case it was the problem, also takes 20-30 seconds:
CMD ["npm", "run", "serve", "--", "--port", "8080"]
  • Creating the server.js and skipping commander takes ~1 sec:
CMD ["node", "server.js"]

This leads us to think that commander is causing the issues, so we should consider skipping commander for frontity serve, as it is the most critical step.


It looks like there are more people having problems with commander in serve (Error: Did you forget to run "frontity build"?) so maybe we have to fix that.

I’ve opened an issue to deal with this. Until we solve it, I guess the best option is to create the server.js file and use node server.js instead of npx frontity serve.
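For reference, with that workaround the Dockerfile from the first post would end up like this (a sketch, assuming the server.js shown above sits at the project root):

```dockerfile
FROM node:12-alpine
WORKDIR /usr/src/app
COPY . .
RUN npm install --only=production
RUN npx frontity build
# Skip the CLI (and commander) entirely at startup
CMD ["node", "server.js"]
```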


The issue with the serve command has been solved in the latest release. Now we can use it and the server takes ~1 sec to start, so the server.js file is no longer needed. We can use something like this in the Dockerfile:

CMD [ "sh", "-c", "npx frontity serve --port 8080" ]