Deploy frontity.org in Google Cloud Run

I wanted to test whether it would be possible to deploy our website, frontity.org (https://github.com/frontity/frontity.org), on Google Cloud Run. I still want to run more tests, but I was able to deploy an initial version, so I thought it could be interesting to other users as well. This is the current status:

And this is the process I followed:

1. Create a project in Google Cloud Run

The first step is to create a project to deploy our app in. Go to https://console.cloud.google.com and create it there. Select a name, and a project ID will be assigned. In our case, the project ID is frontity-org.

2. Enable the proper APIs

In the project you have just created, go to the APIs section and enable the ones we will be using: the Cloud Run API and the Cloud Build API.

3. Get your app ready

You need a Frontity project you want to deploy. You can create one using our quickstart guide. We are using the one we already have for our website: https://github.com/frontity/frontity.org.

Apart from your app code, you have to add two files at the root of your project: a Dockerfile and a .gcloudignore.

Dockerfile

The Dockerfile I used for this first test is:

FROM node:12-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
RUN npm install --only=production
RUN npx frontity build
EXPOSE 8080
CMD ["npx", "frontity", "serve", "--port", "8080"]
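Before pushing anything to Google Cloud, it can be worth checking that the image builds and serves locally. A minimal sketch with Docker (the tag frontity-test is just an example name):

```shell
# Build the image locally from the Dockerfile above (tag name is arbitrary)
docker build -t frontity-test .

# Run it, mapping the exposed port to localhost
docker run --rm -p 8080:8080 frontity-test

# In another terminal, check that the server responds
curl -I http://localhost:8080
```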

.gcloudignore

This substitutes for the usual .dockerignore. This is the one used for this first test:

README.md
node_modules
npm-debug.log
.git

4. Build your container

From this point on we start using Google Cloud and Docker, so you should have Docker and the Google Cloud SDK (https://cloud.google.com/sdk/docs/) installed.

In order to build it, you have to:

Log in to gcloud

The first step is to authenticate so you can work on your project. To do so, run:

gcloud auth login

It will open a new tab in the browser to authenticate.

Set the project you want to work on

You have to select the project you want to work on with the following command:

gcloud config set project PROJECT_ID

In our case:

gcloud config set project frontity-org
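You can double-check which project is active before building anything:

```shell
# Prints the currently configured project ID
gcloud config get-value project
```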

Build the container

Use the following command, which will read the Dockerfile:

gcloud builds submit --tag gcr.io/PROJECT_ID/CONTAINER

In our case:

gcloud builds submit --tag gcr.io/frontity-org/frontity-org
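Once the build finishes, you can confirm that the image landed in the registry:

```shell
# Lists the images stored under your project's registry
gcloud container images list --repository=gcr.io/PROJECT_ID

# Shows the tags and digests for a specific image
gcloud container images list-tags gcr.io/PROJECT_ID/CONTAINER
```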

5. Deploy to Google Cloud Run

Once the build has finished successfully, you can deploy it using this command:

gcloud run deploy --image gcr.io/PROJECT_ID/CONTAINER --platform managed

In our case:

gcloud run deploy --image gcr.io/frontity-org/frontity-org --platform managed

It’ll ask you for the service name, the region, and whether you want to allow unauthenticated invocations. You can select any service name and region, but when the CLI asks whether to allow unauthenticated invocations, you have to answer “yes”.

And that’s it! You may need to wait a few minutes until the service works as expected, but these steps should be enough.
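If you prefer to skip the interactive prompts, the same choices can be passed as flags. A sketch, where the service name and region are just example values:

```shell
# Deploy non-interactively: service name, region and public access as flags
gcloud run deploy frontity-org \
  --image gcr.io/frontity-org/frontity-org \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```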


The next test I’d like to run is to use Google Cloud CDN to serve the static files. According to its docs it seems possible. I’m planning to follow this guide to do it and I’ll post the results here.

Awesome :slight_smile:

It doesn’t seem to be working fine. It takes ages to start, as if it were doing some work it shouldn’t be doing. Maybe we’re doing something wrong?

From what I see in the Quick Start Guide, everything seems fine… Maybe change the start command to "npm run serve -- --port 8080" instead of "npx frontity serve", just in case what’s taking so long is the npx command, which for some reason might be installing frontity each time instead of using the copy in node_modules?

CMD ["npm", "run", "serve", "--", "--port", "8080"]

Doesn’t it work with the .gitignore file?

Yes, it doesn’t seem right, but I wasn’t able to find what I was doing wrong. I’ve been taking a look at the logs and, when it tries to start the server, sometimes it fails and returns this error:

Memory limit of 256M exceeded with 295M used.

We could extend the limit, but the time to start would be the same.
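For reference, the memory limit can also be raised from the CLI instead of the UI (SERVICE_NAME is whatever name you chose when deploying):

```shell
# Raise the memory limit of an existing Cloud Run service to 512 MiB
gcloud run services update SERVICE_NAME \
  --platform managed \
  --memory 512Mi
```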

I’ve tried this but it doesn’t seem to work. However, looking at the logs, there is still one saying this before starting the server:

frontity serve "--port" "8080"

Because of this, I’m wondering if there could be a cache (or something similar) that still holds something incorrect from the first tests. After running the build command and pushing the image, I get this in the console:

Pushing gcr.io/frontity-org/frontity-org
The push refers to repository [gcr.io/frontity-org/frontity-org]
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Preparing
xxxxxxxxxxxxx: Waiting
xxxxxxxxxxxxx: Waiting
xxxxxxxxxxxxx: Waiting
xxxxxxxxxxxxx: Layer already exists
xxxxxxxxxxxxx: Layer already exists
xxxxxxxxxxxxx: Layer already exists
xxxxxxxxxxxxx: Layer already exists
xxxxxxxxxxxxx: Pushed
xxxxxxxxxxxxx: Pushed
xxxxxxxxxxxxx: Pushed
xxxxxxxxxxxxx: Pushed

The xxx are hashes. It tries to push 8 of them, which I guess correspond to the 8 steps we have in the Dockerfile. Maybe the layers where it says “Layer already exists” aren’t being updated, and that is generating issues?

Not by default, but I think we could use the --ignore-file flag to point there.
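If you want to reuse the .gitignore instead of maintaining a separate .gcloudignore, a sketch of that idea (assuming the flag behaves as documented) would be:

```shell
# Use .gitignore to decide which files are uploaded to Cloud Build
gcloud builds submit --ignore-file=.gitignore --tag gcr.io/PROJECT_ID/CONTAINER
```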

I’ve deployed a new version with 512M from the Google Cloud Run UI. Let’s see if that solves the problem.

EDIT: No, it didn’t.

We could try running the command through a shell. That way we also have access to the environment variables:

FROM node:12-alpine
WORKDIR /usr/src/app
COPY . .
RUN npm install --only=production
RUN npx frontity build
CMD [ "sh", "-c", "npx frontity serve --port $PORT" ]
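To check locally that this image picks up the PORT variable the same way Cloud Run injects it, you could run something like this (the tag frontity-test is just an example name):

```shell
# Build the shell-based image and inject PORT manually, as Cloud Run would
docker build -t frontity-test .
docker run --rm -e PORT=8080 -p 8080:8080 frontity-test
```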