
elePHPant in the Room

A story of moving from a JS-only mindset to PHP-native deployment to save costs and automate GoDaddy.

web · laravel · filament · livewire · php · blade · ci/cd · godaddy · devops

TL;DR: Using Laravel and GitHub Actions to automate GoDaddy deployment and stay within the client's budget.

An elephant with Laravel for earrings and tattoos of GoDaddy (left leg) and PHP (right leg)

Introduction

Anyone who knows me knows I live and breathe JavaScript (TypeScript). I prefer a JS ecosystem for building mobile and web apps - it's where I feel most at home. But recently, I faced a classic client dilemma: they wanted zero extra costs for a VPS, no complicated database overhead, and they already had a GoDaddy account they wanted to keep everything on.

In this post, I'll walk you through how I navigated the transition from a "Next.js by default" mindset to a fully automated PHP / Laravel stack for this project. (Feel free to check out the Hailerz GitHub repo here, or see the deployed site live at tiwafoods.com - though note that domain will be changing later!)


The Dilemma: JavaScript vs. Reality

When the client said "GoDaddy," I immediately started weighing my options. I had to balance my love for TypeScript with the reality of shared hosting constraints.

The options on my whiteboard:

  • Next.js with Static Export: Great for speed, but what about the backend? I'd need a headless CMS or an external API, which adds the "extra costs" the client specifically vetoed.
  • Next.js + MySQL: Some shared hosts can run Node.js, but it's often a versioning nightmare and notoriously unstable on lower-tier plans.
  • Next.js + PHP Backend: Possible, but static exports can have Next-specific routing issues when paired with a legacy Apache server.
  • Go Full PHP (Laravel + Livewire + Filament): The "old school" route, but built on modern foundations.

Tu tu tu tu Dora! Dora Dora Dora the Explorer!

So, I went with PHP. Why? Because as much as developers love chasing new frameworks, the reality is that PHP still quietly runs the web. W3Techs reports that PHP powers over 71% of websites with a known server-side language, driven largely by the fact that WordPress alone accounts for ~43% of the entire internet.

On top of that, most clients are already anchored to shared hosting setups. Shared hosting remains the most popular entry point, capturing nearly 38% of the hosting market, with platforms like GoDaddy as the default choice. Recognizing this, I thought it would be a good opportunity for me to finally dance to the piper's tune.

Because PHP is natively supported by GoDaddy with little to no configuration, I didn't have to fight the infrastructure. I wasn't deeply familiar with GoDaddy's modern quirks initially, so I used AI to navigate the landscape. Once I logged in, I hit home with the familiar cPanel layout - it felt like a tech archaeology trip.

Choosing Laravel (specifically the TALL Stack) allowed me to:

  1. Provision a new MySQL database directly through the cPanel wizard.
  2. Keep the entire project in a single repo.
  3. Leverage Filament for a beautiful admin panel that took minutes to set up instead of days of custom React forms.

The Technical Onboarding: Getting My Hands Dirty

Once I committed to PHP, I had to translate that decision into a working server environment. Here is the exact step-by-step journey of turning a blank GoDaddy account into a Laravel-ready machine.

Got GoDaddy Credentials

The first hurdle was purely logistical. The client had the account, but I needed the keys. After a few emails, I received the cPanel login details - a username and a password. This was my entry ticket into the hosting environment. Without these, nothing else mattered.

Navigated with AI Assistance

I will be honest: I am not a cPanel native. I grew up on Terminals and cloud dashboards. The GoDaddy interface felt like stepping into a time machine. I had no idea where the File Manager was, how to find the MySQL wizard, or even where to look for the terminal.

I leaned heavily on AI. I would take a screenshot of whatever page I landed on, upload it, and ask "Where do I click next?" This back-and-forth guided me to critical tools like the File Manager (for viewing files), the MySQL Database Wizard (for setting up the database), and eventually the Terminal (for running Artisan commands). It was slow, but it worked.

Activated SSH Access

The terminal was my goal. I hate clicking through file trees. But when I first looked for the "Terminal" icon in cPanel, it was nowhere to be found. I discovered that GoDaddy disables SSH by default on shared hosting plans for "security reasons."

I had to back out of cPanel entirely and go to the main GoDaddy hosting dashboard. Under the "Settings" tab, I found a toggle labeled "SSH Access." I flipped it to "On" and waited about ten minutes for the server to provision the access. When I returned to cPanel, the Terminal icon had magically appeared in the "Advanced" section.

Generated SSH Keys

Typing a password every time I wanted to run a command or push a deployment felt wrong. I am an Arch Linux user - I value automation. On my local machine, I generated a dedicated SSH key pair specifically for this GoDaddy server:

ssh-keygen -t ed25519 -f ~/.ssh/godaddy

This created a private key (godaddy) and a public key (godaddy.pub). The ed25519 algorithm is modern, secure, and fast.

Added Public Key to the Server

Having a key pair is useless if the server doesn't trust you. I used the ssh-copy-id utility to upload my public key to the GoDaddy server:

ssh-copy-id -i ~/.ssh/godaddy.pub username@72.167.250.36

After entering my cPanel password one last time, the public key was added to the ~/.ssh/authorized_keys file on the server. From that moment on, I could log in without a password, which is a prerequisite for any kind of automated CI/CD pipeline.
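Under the hood, ssh-copy-id does little more than append your public key to the server's ~/.ssh/authorized_keys file with the right permissions. A rough local simulation of what it automates (the /tmp paths and key material are placeholders, not a real server or key):

```shell
# Roughly what ssh-copy-id automates, simulated against a fake "server" home in /tmp
mkdir -p /tmp/fake_server/.ssh
chmod 700 /tmp/fake_server/.ssh
# Append the public key (the key material here is a placeholder, not a real key)
echo "ssh-ed25519 AAAAC3...placeholder godaddy-deploy" >> /tmp/fake_server/.ssh/authorized_keys
chmod 600 /tmp/fake_server/.ssh/authorized_keys
# sshd refuses keys in files with loose permissions, hence the chmod calls
stat -c '%a' /tmp/fake_server/.ssh/authorized_keys   # prints 600
```

The permission bits matter: sshd silently ignores authorized_keys files that are group- or world-writable, which is a classic source of "why is it still asking for a password?" confusion.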

Created a Production Database

An empty server is just a server. I needed a place to store talent profiles and bookings. Back in cPanel, I opened the "MySQL Database Wizard."

I walked through the steps:

  1. Created a database.
  2. Created a database user.
  3. Autogenerated a strong, complex password.
  4. Crucially: On the final step, I clicked "ALL PRIVILEGES." Without this, Laravel would not be able to run migrations or write to the tables.

Configured the .env File

With the database ready, I needed to tell Laravel how to connect to it. I SSH'd into the server and manually created the .env file inside the hailerz directory using vim, obviously.

vim /home/username/hailerz/.env

I populated it with the database credentials from the previous step, set APP_ENV=production, and added the admin credentials for the seeder. This is also where I learned a hard lesson: if your database password contains special characters like #, @, or $, you must wrap the entire password string in double quotes. Otherwise, Laravel's parser chokes, and you get a cryptic connection failure.

DB_PASSWORD="[DB_PASSWORD]"
ADMIN_EMAIL="[EMAIL_ADDRESS]"
ADMIN_PASSWORD="[PASSWORD]"
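For context, the database section of that .env ended up looking roughly like this (all names and values here are illustrative — cPanel prefixes database and user names with the account username, and your credentials will differ):

```ini
APP_ENV=production
APP_DEBUG=false

DB_CONNECTION=mysql
DB_HOST=localhost
DB_DATABASE=username_hailerz
DB_USERNAME=username_hailerz
# Quotes are mandatory when the password contains characters like # or @
DB_PASSWORD="p@ss#w0rd"
```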

The Deployment Disaster: Nuking the Portfolio

Every explorer hits a trap eventually. In my first attempt to deploy, I accidentally nuked the client's existing portfolio site. Because I was trying to point things to the root public_html, I managed to overwrite the live site.

This disaster forced a pivot to a different domain, tiwafoods.com. Unfortunately, this domain was in a state of absolute neglect. It had no SSL configured and the DNS records were pointing to old DreamHost servers instead of GoDaddy. I had to manually update the A records, swap the nameservers and essentially rebuild the domain's plumbing from scratch before even thinking about code.


Set up the Symlink Structure

The final puzzle piece was the file structure. GoDaddy expects websites to live in public_html, but Laravel's entry point is inside the public/ folder. I could not just move my Laravel app into public_html because there was already a live client site running there.

I needed to isolate Tiwa Foods into its own "room." I created a dedicated directory for the addon domain:

mkdir -p /home/username/tiwafoods.com
ln -s /home/username/hailerz/public /home/username/tiwafoods.com/public

Then, in cPanel's "Domains" section, I pointed the document root of tiwafoods.com to tiwafoods.com/public. This kept the client's site completely untouched while giving Tiwa Foods its own clean slate.
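The whole layout is easy to sanity-check locally: a docroot directory whose public folder is just a symlink into the Laravel app's public/ folder. The /tmp paths below are stand-ins for the real home directory:

```shell
# Recreate the addon-domain layout in a scratch directory
mkdir -p /tmp/home/hailerz/public /tmp/home/tiwafoods.com
echo "laravel front controller" > /tmp/home/hailerz/public/index.php
# The docroot's public/ is a symlink into the app's public/ folder
ln -s /tmp/home/hailerz/public /tmp/home/tiwafoods.com/public
# Requests against the docroot resolve through the link to the real file
cat /tmp/home/tiwafoods.com/public/index.php   # prints "laravel front controller"
```

Because the docroot only ever sees public/, the rest of the Laravel app (.env, storage/, vendor/) stays outside the web-accessible tree — the same isolation Laravel's default layout is designed for.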


Why spend 2 minutes on a manual upload when you can spend 2 hours automating it?

From basic arithmetic, given an infinite amount of time, no matter how long it takes to automate a task, you will eventually make up for the time spent automating it.

One of the biggest pain points of shared hosting is the "FTP Drag-and-Drop" trauma. I refused to do that. With SSH and passwordless login working, I set up a CI/CD pipeline using GitHub Actions.

Here is the logic I used to bridge my local development with the GoDaddy server:

# .github/workflows/deploy.yml
name: Deploy to GoDaddy
on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      
      - name: Build Assets
        run: |
          npm install
          npm run build
          
      - name: Sync to Server
        uses: appleboy/scp-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          source: "."
          target: "~/hailerz"

      - name: Remote Commands (Migrate and Cache)
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd hailerz
            php artisan migrate --force
            php artisan optimize
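The workflow assumes three repository secrets: HOST, USERNAME, and SSH_PRIVATE_KEY. They can be added through the repo's Settings → Secrets and variables → Actions page, or from the terminal if you have the GitHub CLI installed (the values below are placeholders):

```shell
# One-time setup: register the deploy secrets with the GitHub CLI
gh secret set HOST --body "72.167.250.36"
gh secret set USERNAME --body "username"
gh secret set SSH_PRIVATE_KEY < ~/.ssh/godaddy   # the PRIVATE key, not the .pub
```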

Overcoming the "Forbidden" 403

Shared hosting doesn't always like Laravel's /public folder structure. I had to get creative with symbolic links to ensure public_html stayed clean.

I created an isolated directory and symlinked it:

ln -s /home/username/hailerz/public /home/username/tiwafoods.com

The process was exhausting. I spent hours staring at a "403 Forbidden" error and an insecure HTTP warning. Eventually, I gave up and went to sleep, leaving the site broken.

When I woke up, the "magic" had happened. Because the nameserver changes had finally propagated, GoDaddy's AutoSSL was able to verify the domain. The site had automatically transitioned to HTTPS with a healthy certificate. As it turns out, the "Forbidden" issue was largely tied to the server refusing to serve content over an unverified connection while the app was expecting secure protocols. Everything was working perfectly.


The Final Boss: Debugging Open Graph Images in Production

This has been an excellent, real-world debugging journey through the complexities of server environments versus local development. When moving an application from a local setup (like Laravel Sail or Valet) to a shared hosting production environment (like GoDaddy cPanel), Open Graph (OG) image generation often breaks due to rigid security and resource constraints.

Here is a detailed breakdown of the issues we encountered, the hypotheses we tested, and the final architectural solution that successfully resolved the broken OG metadata.

Phase 1: The Initial Issue – Blank White OG Cards

The Problem: In the local development environment, sharing a Talent profile link successfully generated an Open Graph preview card with the talent's image and the agency logo overlaid on top. In production, however, the OG tags were present, but social sharing debuggers (like Facebook or LinkedIn) rendered a completely blank, white box instead of the expected composite image.

The Hypothesis: The initial assumption was an issue with the fallback logic or a missing primary image. We suspected that when a talent lacked an uploaded profile photo, the application was attempting to generate a composite image with a null base, resulting in a blank output.

The Initial Fixes:

  1. Robust Accessors: We rewrote the getProfilePhotoUrlAttribute in the Talent model to guarantee an image URL was always returned. If the primary image was missing, it fell back to a dynamic initials generator (ui-avatars.com).
  2. Controller Hardening: We updated the OgImageController to catch exceptions during the image composition process using Intervention Image, returning a solid blue fallback canvas if anything failed.

The Result: This fixed the missing initials issue on the standard UI cards, but the OG endpoint still failed in production.

Phase 2: The Silent Crash – 500 Server Error

The Problem: Upon closer inspection using a direct URL test (bypassing the social scrapers and hitting the /og/talent/{slug} endpoint directly in an incognito window), the server was throwing a 500 Internal Server Error.

The Hypothesis: A 500 error during image processing usually indicates one of three things:

  1. The PHP GD extension is missing or lacks WebP support.
  2. The server is running out of memory (Memory Exhaustion) while decoding the image into RAM.
  3. A syntax error in the Intervention Image library (e.g., using v4 syntax on a v3 installation).

The Attempted Fixes:

  1. GD Extension Verification: We confirmed via cPanel's "Select PHP Version" that the gd extension was checked and active.
  2. Memory Limit Increase: We advised increasing the PHP memory_limit to 256M or 512M in cPanel to ensure large images could be unpacked into memory.
  3. Syntax Alignment: We ensured the controller was using the correct, unified syntax for Intervention Image.
  4. Temporary Debugging: We implemented a temporary try/catch block that forcefully dumped the raw exception to the browser (dd($e->getMessage())) to expose the hidden error.

Phase 3: Uncovering the Root Cause – The Database Cache Conflict

The Problem: The temporary debug dump revealed the true culprit inside the laravel.log:

SQLSTATE[22007]: Invalid datetime format: 1366 Incorrect string value: '\x9D\x01*\xB0\x04v...' for column 'hailerz'.'cache'.'value'

The Analysis: This log entry completely shifted our understanding of the problem. The issue was not with generating the image itself; the image was being generated perfectly. The crash occurred when Laravel attempted to save that generated image.

  1. The Goal: The OgImageController used Cache::remember() to store the generated image so it wouldn't have to rebuild the composite logo overlay every time a social bot scraped the page.
  2. The Output: The controller output raw, binary WebP image bytes (a string of unreadable data like \x9D\x01*\xB0).
  3. The Environment: On shared hosting, the Laravel cache driver is almost always set to database (storing cache in MySQL) rather than redis or memcached (which handle binary data effortlessly).
  4. The Crash: MySQL text or mediumtext columns are designed for UTF-8 string characters. When Laravel attempted to execute an INSERT statement containing raw binary image data into the cache table, MySQL immediately rejected it with an Incorrect string value error, triggering the fatal 500 crash.
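The mismatch is easy to reproduce with nothing but coreutils — raw image bytes are not valid UTF-8 text, but their base64 form is plain ASCII. The four bytes below (0x9D 0x01 0x2A 0xB0, written as octal escapes) mimic the fragment from the error message:

```shell
# Four raw binary bytes, like the start of the WebP payload in the error log
printf '\235\001\052\260' > /tmp/raw.bin
# base64 turns them into plain ASCII that a MySQL TEXT column will accept
base64 /tmp/raw.bin                                  # prints "nQEqsA=="
# And the transformation is lossless: decode and compare with the original
base64 /tmp/raw.bin | base64 -d | cmp -s - /tmp/raw.bin && echo "round-trip OK"
```

The trade-off is size: base64 inflates the payload by about 33%, which is an acceptable price for making the database cache driver binary-safe.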

Phase 4: Base64 Encoding

The Strategy: To cache binary image data inside a standard MySQL database column, the data must be transformed into a safe, text-only format.

The Fix: We refactored the caching logic in the OgImageController:

  1. Encoding for Storage: Before saving the image to the cache, we passed the raw bytes through PHP's base64_encode(). This converted the binary image data into a long, harmless string of alphanumeric characters that MySQL happily accepted.
  2. Decoding for the Browser: When retrieving the image from the cache (or after generating it), we passed the base64 string through base64_decode() to turn it back into raw binary bytes.
  3. Serving the Response: We served those decoded binary bytes to the browser with the correct Content-Type: image/webp headers.

The Code Implementation:

// Step 1: Cache the Base64 String
$base64Image = Cache::remember($cacheKey, 86400, function () use ($talent) {
    // ... image generation logic ...
    
    // Return the BASE64 encoded string so MySQL accepts it
    return base64_encode($image->encodeUsingMediaType('image/webp')->toString());
});

// Step 2: Decode back to binary
$imageBytes = base64_decode($base64Image);

// Step 3: Serve the image
return Response::make($imageBytes, 200, [
    'Content-Type'  => 'image/webp',
    'Cache-Control' => 'public, max-age=86400',
]);

Phase 5: The Final Hurdle – Scrapers Hate WebP

Just when I thought I had defeated the final boss with the base64 caching fix, I ran into a classic scraper compatibility issue.

The Problem: Testing the newly cached images revealed an annoying inconsistency. Sharing the link on WhatsApp worked perfectly, but strict scrapers like LinkedIn, iMessage, and debugging tools like OpenGraph.xyz were still failing, rendering a blank box or a broken link icon.

The Hypothesis: While modern web browsers love WebP because it's fast and lightweight, social media scrapers are notoriously outdated. WhatsApp and Discord have updated their bots to understand image/webp. However, LinkedIn, iMessage, OpenGraph.xyz, and some Facebook servers strictly expect image/jpeg or image/png. If you hand them a WebP file, they silently fail.

The Fix: The industry standard for Open Graph images is JPEG. It's universally supported by every single scraper on the internet. I needed to tell my OgImageController to encode the final image as a JPEG instead of a WebP.

Leaning on my AI workflow, I drafted a prompt to quickly refactor the controller code:

# Role & Context
You are a Senior Full-Stack Laravel Developer. We are fixing an Open Graph scraper compatibility issue. Currently, our `OgImageController` generates and serves `image/webp`. While this works for some platforms (like WhatsApp), strict scrapers (like LinkedIn and OpenGraph.xyz) are rejecting the WebP format, resulting in broken OG previews.

# 🚨 Your Task: Convert Output to JPEG
* **Target:** `app/Http/Controllers/OgImageController.php`
* **Action:** Modify the `show` method to encode, cache, and serve the image as a standard JPEG instead of WebP.

Please apply these specific changes to the `show` method:
1. Update the cache key to `$cacheKey = "talent_og_v5_{$talent->id}";` (We must bump the version to bypass the currently cached WebP strings).
2. Change the encoding lines inside the try/catch blocks from:
   `encodeUsingMediaType('image/webp')` 
   to:
   `encodeUsingMediaType('image/jpeg')`
3. Change the final Response header from:
   `'Content-Type' => 'image/webp'` 
   to: 
   `'Content-Type' => 'image/jpeg'`

**Output:** Provide the fully updated `OgImageController.php` code.

The Result: Once the updated controller was pushed via the CI/CD pipeline, I jumped back into OpenGraph.xyz and hit the "Fetch New Scrape" button to bust their cache. The lights finally came on. The OG cards rendered perfectly! (And if it had still failed after the JPEG switch, my fallback theory was that GoDaddy's "ModSecurity" firewall was actively blocking the OpenGraph bot—thankfully, we didn't have to go down that rabbit hole).


Conclusion

Will I be using PHP again in the future? Maybe. Possibly.