Deploy a Laravel App on Kubernetes (AWS EKS)
This is a step-by-step guide: what each part does, what to copy, and what often goes wrong — written so you can follow it without already being a Kubernetes expert.
PHP-FPM speaks FastCGI, not plain HTTP. You cannot point a browser or load balancer straight at PHP-FPM's port 9000; you need Nginx or Apache in front. This guide uses Nginx + PHP-FPM in one container, started by Supervisor, and exposes port 80 to Kubernetes.
The big picture — how a request reaches Laravel
A request travels: browser → AWS load balancer → Kubernetes Service → Nginx (port 80) inside the pod → PHP-FPM (FastCGI, port 9000) → Laravel. So in the YAML, the Service targetPort must match what speaks HTTP inside the pod: here, 80 (Nginx), not 9000.
What you need installed
- AWS account with permission to create EKS, ECR, VPC-related resources, and IAM roles.
- AWS CLI configured (aws configure).
- Docker (to build the image).
- kubectl (talks to Kubernetes).
- eksctl (easy way to create EKS) or you can create the cluster in the AWS Console instead.
- A Laravel project that runs locally or in Docker before you add Kubernetes.
Pros and cons — Laravel on EKS
Pros:
- Run multiple copies of your app (pods) for traffic and updates.
- Rolling updates: the new version gradually replaces the old.
- Fits well with RDS, S3, ElastiCache, and ACM (HTTPS certs).
- One place to run Laravel and other services (APIs, workers) if you already use Kubernetes.

Cons:
- Cost and learning curve: control plane, nodes, load balancers, and time to learn YAML.
- You must plan sessions, cache, queues, and uploads, because pods restart and move.
- For one small site, Forge, Vapor, ECS, or a single VM is often simpler.
Prepare Laravel for production
Goal: no debug leaks, fast config, database and cache not on the container disk.
- Set APP_ENV=production and APP_DEBUG=false.
- Generate APP_KEY once: php artisan key:generate — store the value in a Kubernetes Secret, not in Git.
- Database: point DB_HOST to RDS (or managed Postgres). Open security groups so only your cluster/VPC can reach the DB.
- Sessions: use database, redis, or cookie — not file if you run many pods (each pod has its own disk).
- Cache / queues: redis + QUEUE_CONNECTION=redis (ElastiCache) is a common pair.
- Uploaded files: use FILESYSTEM_DISK=s3 (or EFS) so files are not lost when a pod restarts.
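Concretely, a production environment file might combine these choices. Every hostname and key below is a placeholder; adjust for your own resources:

```shell
# Example production .env values; all hostnames and keys are placeholders
APP_ENV=production
APP_DEBUG=false
APP_KEY=base64:PASTE_YOUR_KEY
LOG_CHANNEL=stderr
DB_CONNECTION=mysql
DB_HOST=your-rds.region.rds.amazonaws.com
SESSION_DRIVER=redis
CACHE_DRIVER=redis          # named CACHE_STORE in Laravel 11+
QUEUE_CONNECTION=redis
REDIS_HOST=your-elasticache.region.cache.amazonaws.com
FILESYSTEM_DISK=s3
AWS_BUCKET=your-bucket
```

On the cluster, these same variable names go into the ConfigMap and Secret shown later rather than a .env file.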
Optimize during the Docker build (or in CI):
composer install --no-dev --optimize-autoloader --no-interaction
php artisan config:cache
php artisan route:cache
php artisan view:cache
Do not run php artisan migrate on every container start if you have more than one replica — two pods might migrate at once. Use a one-off Job (below) or your CI pipeline.
Docker: Nginx + PHP-FPM (complete pattern)
Add these files next to your Dockerfile (example paths: docker/nginx.conf, docker/supervisord.conf).
docker/nginx.conf (example)
Serves the public/ folder and sends .php to PHP-FPM.
server {
    listen 80;
    server_name _;
    root /var/www/public;
    index index.php;
    client_max_body_size 64M;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }
}
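A dedicated health endpoint (the troubleshooting table at the end suggests /health) can be answered by Nginx itself. Note this only proves Nginx is up, not PHP-FPM; route it through Laravel instead if you want PHP checked. A sketch to add inside the server block:

```nginx
# Answers load balancer / probe checks without invoking PHP-FPM
location = /health {
    access_log off;
    default_type text/plain;
    return 200 "ok";
}
```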
docker/supervisord.conf (example)
Starts PHP-FPM first, then Nginx. Paths may vary slightly by image — adjust if Supervisor complains.
[supervisord]
nodaemon=true
user=root

; lower priority values start first, so PHP-FPM is up before Nginx
[program:php-fpm]
command=/usr/local/sbin/php-fpm --nodaemonize
priority=10
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
priority=20
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
Dockerfile (example)
FROM php:8.2-fpm
RUN apt-get update && apt-get install -y --no-install-recommends \
        nginx supervisor git zip unzip curl \
        libpng-dev libjpeg-dev libonig-dev libxml2-dev libzip-dev \
    && docker-php-ext-configure gd --with-jpeg \
    && docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /var/www
COPY . .
COPY docker/nginx.conf /etc/nginx/sites-available/default
RUN ln -sf /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default 2>/dev/null || true
COPY docker/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
    && composer install --no-dev --optimize-autoloader --no-interaction \
    && (php artisan config:cache || true) \
    && chown -R www-data:www-data /var/www/storage /var/www/bootstrap/cache
EXPOSE 80
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
Note: If config:cache fails during build because .env is missing, either copy a build-time .env with safe placeholders or run caches at deploy time via an init script — many teams run config:cache in CI with env injected.
.dockerignore (recommended)
Keeps the image smaller and avoids copying secrets from your laptop.
.git
.env
node_modules
vendor
storage/logs/*
bootstrap/cache/*.php
Since composer install runs inside Docker (as above), ignoring your local vendor/ is correct. Remove vendor from .dockerignore only if you instead copy a pre-built vendor/ from CI — pick one strategy: install in Docker (common) or copy from CI.
Smoke-test locally first:
docker build -t laravel-local .
docker run -p 8080:80 --env-file .env laravel-local
Then open http://localhost:8080. If this fails, Kubernetes will not fix it.
Build the image and push to ECR
ECR is AWS’s private Docker registry. EKS nodes can pull from it without sending the image over the public internet to Docker Hub.
- Create a repository (once):
  aws ecr create-repository --repository-name laravel-app --region us-west-2
- Log Docker in to ECR:
  aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com
- Build, tag, and push (replace account ID and region):
  docker build -t laravel-app:latest .
  docker tag laravel-app:latest <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/laravel-app:latest
  docker push <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/laravel-app:latest
Docker Hub (docker push username/laravel-app) still works for learning; for production on AWS, ECR is typical.
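The registry hostname pattern is easy to typo. A tiny shell sketch that assembles the image URI from its parts (the account ID here is a placeholder):

```shell
# Compose the ECR image URI from its parts (placeholder values)
ACCOUNT_ID=123456789012
REGION=us-west-2
REPO=laravel-app
TAG=latest
IMAGE="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}"
echo "${IMAGE}"
# → 123456789012.dkr.ecr.us-west-2.amazonaws.com/laravel-app:latest
```

The same URI goes into the image: field of the Deployment.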
Create the EKS cluster
eksctl create cluster --name laravel-cluster --region us-west-2 --nodes 2 --node-type t3.medium
This takes several minutes. Then connect kubectl:
aws eks update-kubeconfig --name laravel-cluster --region us-west-2
kubectl get nodes
You should see your nodes Ready.
Kubernetes: Deployment and Service
Use the same image URI you pushed to ECR. Set containerPort: 80.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: laravel
  template:
    metadata:
      labels:
        app: laravel
    spec:
      containers:
        - name: laravel
          image: <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/laravel-app:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: laravel-config
            - secretRef:
                name: laravel-secret
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "1000m"
              memory: "512Mi"
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 45
            periodSeconds: 20
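The resource requests above are also what autoscaling decisions are based on. If you later add an HPA (see the hardening section), a minimal sketch might look like this; the replica bounds and 70% CPU target are illustrative, and the cluster needs metrics-server installed:

```yaml
# hpa.yaml — example autoscaler; min/max and the CPU target are illustrative
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: laravel-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: laravel-app
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```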
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: laravel-service
spec:
  type: LoadBalancer
  selector:
    app: laravel
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
kubectl get svc will show an EXTERNAL-IP for the load balancer (on AWS this is usually a hostname). That is your public URL until you add a custom domain and HTTPS.
ConfigMap and Secret (what goes where)
| Put in ConfigMap (non-secret) | Put in Secret |
|---|---|
| APP_ENV, APP_DEBUG, LOG_CHANNEL, DB_HOST, DB_DATABASE, DB_USERNAME, AWS_DEFAULT_REGION | APP_KEY, DB_PASSWORD, API keys, mail passwords |
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: laravel-config
data:
  APP_ENV: "production"
  APP_DEBUG: "false"
  LOG_CHANNEL: "stderr"
  DB_CONNECTION: "mysql"
  DB_HOST: "your-rds.region.rds.amazonaws.com"
  DB_PORT: "3306"
  DB_DATABASE: "laravel"
  DB_USERNAME: "laravel"
# secret.yaml — use real values; do not commit to public git
apiVersion: v1
kind: Secret
metadata:
  name: laravel-secret
type: Opaque
stringData:
  APP_KEY: "base64:PASTE_YOUR_KEY"
  DB_PASSWORD: "PASTE_DB_PASSWORD"
Laravel reads these as environment variables — match names to config/database.php and config/app.php (DB_*, APP_KEY).
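One gotcha: stringData (used above) accepts plain text, while the older data field requires base64-encoded values, and kubectl get secret -o yaml always shows the encoded form. Encoding and decoding by hand:

```shell
# Secrets' `data:` field stores base64; stringData: takes plain text
echo -n 'secret-password' | base64        # encode (-n: no trailing newline)
# → c2VjcmV0LXBhc3N3b3Jk
echo 'c2VjcmV0LXBhc3N3b3Jk' | base64 -d   # decode
```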
Database migrations — Kubernetes Job (one-time)
Run migrations once per deploy, not in every pod.
apiVersion: batch/v1
kind: Job
metadata:
  name: laravel-migrate
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/laravel-app:latest
          command: ["php", "artisan", "migrate", "--force"]
          envFrom:
            - configMapRef:
                name: laravel-config
            - secretRef:
                name: laravel-secret
  backoffLimit: 2
Apply after the ConfigMap and Secret exist: kubectl apply -f migrate-job.yaml. Check: kubectl logs job/laravel-migrate. Jobs are immutable, so on the next deploy delete the old one first (kubectl delete job laravel-migrate) before re-applying.
Sessions, cache, uploads — don’t rely on pod disk
storage/ on disk can vanish after a restart or scaling event.
- Uploads / public files: S3 + Laravel filesystem, or mount EFS (shared volume) if you must use local APIs.
- Sessions: database, Redis, or encrypted cookies — not plain files across replicas.
- Cache: Redis or Memcached (ElastiCache).
Apply manifests and verify
kubectl apply -f secret.yaml
kubectl apply -f configmap.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl rollout status deployment/laravel-app
kubectl get pods
kubectl get svc laravel-service
Get the load balancer address from EXTERNAL-IP or HOSTNAME (AWS often uses hostname). Open it in a browser.
Logs from one pod:
kubectl logs deploy/laravel-app --tail=100
HTTPS and production hardening (short)
- ACM certificate + Ingress with AWS Load Balancer Controller (ALB) — standard on EKS.
- HPA (Horizontal Pod Autoscaler) to scale on CPU/memory.
- Separate workers for queues: a second Deployment running php artisan queue:work.
- CloudWatch or Prometheus for metrics and logs.
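A second Deployment for queue workers reuses the same image, ConfigMap, and Secret but overrides the container command. A sketch; the replica count and worker flags are illustrative:

```yaml
# queue-worker.yaml — same image, different command; no Service needed
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: laravel-worker
  template:
    metadata:
      labels:
        app: laravel-worker
    spec:
      containers:
        - name: worker
          image: <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/laravel-app:latest
          command: ["php", "artisan", "queue:work", "--tries=3", "--max-time=3600"]
          envFrom:
            - configMapRef:
                name: laravel-config
            - secretRef:
                name: laravel-secret
```

Workers never receive HTTP traffic, so no Service, ports, or probes are required; when queue:work exits after --max-time, the kubelet restarts the container, which keeps worker memory in check.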
If something fails — quick checks
| Symptom | What to check |
|---|---|
| ImagePullBackOff | Image name/tag wrong; ECR login; node IAM role allows pulling from ECR. |
| CrashLoopBackOff | kubectl logs — bad .env / Secret keys; Nginx/PHP config path; Supervisor. |
| 502 / timeout from load balancer | Readiness probe path — use /health if you add one; security groups must allow LB → nodes. |
| App loads but DB error | RDS security group, DB_HOST, credentials in Secret, database exists. |
| Session lost between requests | file sessions with multiple pods — switch to Redis/database. |
Short glossary
- EKS — AWS-managed Kubernetes control plane.
- Pod — one or more containers running together (here: your Laravel image).
- Deployment — keeps N copies of the pod running and rolls out updates.
- Service — stable network address (and load balancer) to reach pods.
- ConfigMap / Secret — inject configuration; Secrets are for private data.
- Ingress — HTTP routing, often used with HTTPS and hostnames.
In short: run Nginx on port 80 inside the image, keep secrets in a Secret, put the database on RDS, run migrations as a Job, and treat pods as replaceable (S3/Redis for state).
If this feels like a lot of moving parts, that is normal. A smaller team often ships faster with Forge, Vapor, or a single EC2 instance first, then moves to EKS when it needs scale or many services.



