While studying an open-source codebase that I intended to contribute to, I used ChatGPT as a note-taking tool. This is my side of one of our conversations.
Explain the following three files and how they work together in this open-source codebase. I’m assuming they were used to start the production server on Heroku, because I used Next’s dev server during development.
Makefile:
dev:
	docker compose up --build --remove-orphans --renew-anon-volumes
docker-compose.yml:
version: '3.7'
services:
  app:
    build:
      dockerfile: Dockerfile
    volumes:
      - ./src:/app/src
    ports:
      - 3000:3000
Dockerfile:
FROM node:18.16-alpine
WORKDIR /app
# Install dependencies.
COPY package.json .
COPY yarn.lock .
RUN yarn install
# Copy config.
COPY next.config.js .
COPY postcss.config.js .
COPY tailwind.config.js .
COPY tsconfig.json .
# Copy the application code.
COPY public public
COPY src src
# Run the application.
CMD [ "yarn", "dev" ]
Sounds about right. So the app ran inside a container, much as it probably would on Render, except that here the container had to be provided and configured by the developer. Does that make this setup essentially one abstraction layer lower than Render?
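(For comparison, the higher-abstraction path on Render would be to skip the Dockerfile entirely and declare the service in a render.yaml blueprint, letting Render supply the runtime and build the app itself. The sketch below is illustrative only; the service name, commands, and scripts are assumptions, not taken from this codebase.)

render.yaml (hypothetical):
services:
  - type: web                                # a standard web service
    name: app                                # hypothetical service name
    env: node                                # Render provides the Node runtime; no Dockerfile needed
    buildCommand: yarn install && yarn build # assumes a standard Next.js build script
    startCommand: yarn start                 # assumes a standard Next.js start script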
Does Render in fact run its servers in Docker containers? I know it builds on AWS infrastructure, as most serverless platforms do, but the approach isn't clear to me. I also know the web server itself is Nginx, which serves your app's production assets, but I'm not sure whether each environment instance is isolated at the OS level (a container) or at the hardware level (a VM).
So to clarify: does Heroku require the developer to provide and configure the container, as the author has done here, or does it also offer the option of abstracting that away, as Render does?
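(For reference, Heroku's container-based path looks roughly like the snippet below: a heroku.yml pointing Heroku at a developer-supplied Dockerfile, in contrast to the buildpack path, where Heroku detects a Node app and only needs a start command. This is a sketch under those assumptions, not a file from this codebase.)

heroku.yml (hypothetical):
build:
  docker:
    web: Dockerfile   # build this Dockerfile for the web process
run:
  web: yarn start     # assumed production start command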
Was this choice more likely made out of familiarity with the platform, or out of a desire for control?