The Laravel framework will be used to validate the interoperability of the infrastructure created by Terraform. The steps are similar for any LAMP, LEMP, or comparable application. I have kept the code simple and straightforward to showcase the functionality; please add the necessary abstractions so that the code outlives the application.
Start by creating a new Laravel project on your development machine and commit all the initial files created by the script. Please refer to the documentation for detailed instructions.
Install the driver for Laravel framework to communicate with S3.
composer require league/flysystem-aws-s3-v3 "^3.0"
Update the schedule method in the file app/Console/Kernel.php with the following (remember to import Illuminate\Support\Facades\Log at the top of the file):

$schedule->call(function () {
    Log::error('Invoking scheduler at '.date('Y-m-d H:i:s'));
})->everyMinute();

The above code tells the Laravel scheduler to run the logging task every minute. The closure form ($schedule->call()) is used because $schedule->command() expects an Artisan command name, not a PHP expression.
Create a new file at app/Http/Controllers/SampleController.php. The methods in this file are used for invoking a job through the Laravel queue, creating a file in S3, and inserting a row into the RDS table respectively.
<?php

namespace App\Http\Controllers;

use App\Jobs\IPLogger;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Storage;

class SampleController extends Controller
{
    public function logIPthruQ(Request $request)
    {
        IPLogger::dispatch($request);

        return response()->json(['message' => 'IP logged successfully']);
    }

    public function createInS3()
    {
        Storage::put(date('Y-m-d-H-i-s').'.txt', date('Y-m-d H:i:s'));

        return response()->json(['message' => 'S3 file created successfully']);
    }

    public function createInRDS()
    {
        DB::table('logs')->insert(['created_at' => date('Y-m-d H:i:s')]);

        return response()->json(['message' => 'RDS insert successful']);
    }
}
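The createInRDS method assumes a logs table already exists in the RDS database. A minimal sketch of that table, inferred only from the insert above (in practice you would create it through a Laravel migration; the id column is my assumption):

```sql
-- Hypothetical schema inferred from the insert in createInRDS
CREATE TABLE logs (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    created_at TIMESTAMP NULL
);
```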
Create the job class for processing the queued job at app/Jobs/IPLogger.php:
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class IPLogger implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $ip;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct(Request $request)
    {
        $this->ip = $request->ip();
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        Log::error('Request from IP address: '.$this->ip);
    }
}
Next, we need to add configuration in the file config/database.php so that the Laravel application can communicate with the Redis cluster. Add the following inside the redis key:
'cluster' => env('REDIS_CLUSTER_ENABLED', false),
Continuing inside the redis key, create a new block for configuring the values of the Redis cluster:
'clusters' => [
    'default' => [
        [
            'host' => env('REDIS_HOST', 'localhost'),
            'password' => env('REDIS_PASSWORD'),
            'port' => env('REDIS_PORT', 6379),
            'database' => 0,
        ],
    ],
    'cache' => [
        [
            'host' => env('REDIS_HOST', 'localhost'),
            'password' => env('REDIS_PASSWORD'),
            'port' => env('REDIS_PORT', 6379),
            'database' => 1,
        ],
    ],
],
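For this configuration to take effect, the corresponding environment variables have to be set. A sketch of the relevant .env entries, assuming the variable names used above and a placeholder ElastiCache endpoint (the hostname is not real; QUEUE_CONNECTION and CACHE_DRIVER are my assumptions, matching the Redis queue worker used later):

```ini
REDIS_CLUSTER_ENABLED=true
REDIS_HOST=your-redis-cluster.xxxxxx.use1.cache.amazonaws.com
REDIS_PORT=6379
QUEUE_CONNECTION=redis
CACHE_DRIVER=redis
```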
Register the following routes in the file routes/web.php to invoke the controller functions; requesting /q, /s3, and /rds will exercise the queue, S3, and RDS code paths respectively.
Route::get('/q', 'App\Http\Controllers\SampleController@logIPthruQ');
Route::get('/s3', 'App\Http\Controllers\SampleController@createInS3');
Route::get('/rds', 'App\Http\Controllers\SampleController@createInRDS');
Commit and push the changes to a new repository; a separate repository lets us isolate application code from infrastructure code. All the code for the above section is available in this GitHub link.
Create the file start.sh with the following content:
#!/bin/sh

env=${APP_ENV:-production}

if [ "$env" != "local" ]; then
    (cd /var/www/html && php artisan config:cache && php artisan route:cache && php artisan view:cache)
fi

role=${CONTAINER_ROLE:-app}

if [ "$role" = "app" ]; then
    exec /usr/local/sbin/php-fpm
elif [ "$role" = "scheduler" ]; then
    exec /usr/sbin/crond -f -l 8
elif [ "$role" = "worker" ]; then
    exec /usr/local/bin/php /var/www/html/artisan queue:work redis --no-interaction --sleep=3 --tries=3
else
    echo "Could not match the container role \"$role\""
    exit 1
fi
This script picks the role of the Laravel container based on the environment variable CONTAINER_ROLE: the web server (PHP-FPM) is started by default when CONTAINER_ROLE is omitted or set to app, the scheduler runs when the value is scheduler, and the queue worker starts when the value is worker.
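The role dispatch can be reduced to a small pure function for illustration. This sketch is mine, not part of start.sh; it only maps a CONTAINER_ROLE value to the process the container would run:

```shell
#!/bin/sh
# Hypothetical helper mirroring the dispatch logic in start.sh
role_to_process() {
    role=${1:-app}    # default role is "app", as in start.sh
    case "$role" in
        app)       echo "php-fpm" ;;
        scheduler) echo "crond" ;;
        worker)    echo "artisan queue:work" ;;
        *)         echo "unknown" ;;
    esac
}

role_to_process           # php-fpm (the default)
role_to_process worker    # artisan queue:work
```

The same container image therefore serves three workloads; ECS only needs to set one environment variable per service.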
Create a file called laravel_scheduler containing the cron entry for invoking the Laravel scheduler:
* * * * * cd /var/www/html && /usr/local/bin/php artisan schedule:run --no-interaction 2> /proc/1/fd/1 1> /dev/null &
The final part of this process is to set up a Dockerfile for the Laravel application. We will use the Alpine image and install the necessary extensions. We then copy the Laravel project folder, the start.sh script, and the laravel_scheduler script. Next, we expose port 9000 used by PHP-FPM and execute the start script.
FROM php:8.1-fpm-alpine

# Set working directory
WORKDIR /var/www/html

RUN apk update && apk add --no-cache oniguruma-dev libzip-dev \
    # Install MySQL extensions
    && docker-php-ext-install pdo_mysql mbstring zip exif pcntl \
    # Install PHP-Redis
    && apk add --no-cache pcre-dev $PHPIZE_DEPS \
    && pecl install redis \
    && docker-php-ext-enable redis.so

# Copy existing application directory contents with directory permissions
COPY --chown=www-data:www-data project /var/www/html

# Copy the startup script
COPY start.sh /usr/local/bin/start
RUN chmod u+x /usr/local/bin/start

# Add Docker custom crontab script
ADD laravel_scheduler /etc/cron.d/laravel_scheduler

# Specify crontab file for running
RUN crontab /etc/cron.d/laravel_scheduler

# Expose the webroot directory to NGINX container
VOLUME ["/var/www/html/public"]

# Expose port 9000 of the php-fpm server
EXPOSE 9000

CMD ["/usr/local/bin/start"]
The code for this section is available in this GitHub commit. I have created a Docker image containing all the code for this section. It can be pulled to your machine by executing the command:
docker pull chynkm/laravel-ecs:latest
NGINX will be used as the web server for the Laravel application. It will proxy the relevant requests to PHP-FPM. Create a file called default.conf with the following content:
server {
    gzip on;
    gzip_proxied any;
    gzip_types text/plain application/json;
    gzip_min_length 1000;

    listen 80;
    listen [::]:80;
    server_name localhost;
    root /var/www/html/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        fastcgi_pass localhost:9000;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_read_timeout 150;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }

    # Healthcheck
    location = /health_check {
        access_log off;
        add_header 'Content-Type' 'application/json';
        return 200 '{"status":"UP"}';
    }

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log error;

    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";
    add_header Referrer-Policy 'same-origin';
}
The important aspect of the file is the /health_check endpoint. This will be used by Docker to validate the container’s health status. Logging this endpoint isn’t necessary, as it would add overhead for the container without any benefit. Another point to note is that the Laravel application codebase isn’t copied into the NGINX container; rather, it will be shared from the Laravel Docker container. The volume share will be configured in the ECS task definitions.
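As a rough preview of that volume share, an ECS task definition fragment could look like the following. This is a sketch only: the container and volume names are placeholders of mine, and the actual definition is covered later in the series.

```json
{
  "volumes": [{ "name": "webroot" }],
  "containerDefinitions": [
    {
      "name": "laravel",
      "image": "chynkm/laravel-ecs:latest",
      "mountPoints": [{ "sourceVolume": "webroot", "containerPath": "/var/www/html/public" }]
    },
    {
      "name": "nginx",
      "image": "chynkm/nginx-ecs:latest",
      "mountPoints": [{ "sourceVolume": "webroot", "containerPath": "/var/www/html/public" }]
    }
  ]
}
```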
Create the Dockerfile for the NGINX application. We will use the stable Alpine image.
FROM nginx:stable-alpine

# Copy custom configuration file
COPY default.conf /etc/nginx/conf.d/default.conf
The code for this section is available in this GitHub commit. I have created a Docker image containing all the changes mentioned for this section. It can be pulled to your machine by executing the command:
docker pull chynkm/nginx-ecs:latest
Sidecar containers let you run two tightly coupled containers together, sharing resources such as storage and network interfaces. Using a sidecar container reduces the latency for data and network access, and the sidecar can be developed in a different language than the main container. We will use NGINX as a sidecar container for our PHP-FPM based Laravel application; I consider NGINX the sidecar, since PHP-FPM does all the heavy lifting of the application. A picture speaks a thousand words.
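To try the pair locally before moving to ECS, a docker-compose file can emulate the sidecar layout. This sketch is an assumption of mine for local testing only; network_mode makes NGINX share the Laravel container's network namespace, so the fastcgi_pass localhost:9000 directive in default.conf resolves, and the named volume mimics the ECS volume share:

```yaml
# Hypothetical docker-compose.yml for local testing only
services:
  app:
    image: chynkm/laravel-ecs:latest
    ports:
      - "80:80"                    # published here, since "web" shares this network namespace
    volumes:
      - webroot:/var/www/html/public
  web:
    image: chynkm/nginx-ecs:latest
    network_mode: "service:app"    # share app's network, like containers in one ECS task
    volumes:
      - webroot:/var/www/html/public
    depends_on:
      - app

volumes:
  webroot:
```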
Part 1 – Getting started with Terraform
Part 2 – Creating the VPC and ALB
Part 3 – Backing services
Part 4 – Configuring the LEMP stack in Docker
Part 5 – ECS