How I Fixed a Slow API Response Time in Laravel & Node.js
🛑 The Problem — “Why Is My API So Slow?”
A client once reached out, clearly frustrated:
“Our mobile app API is getting slower every day. Sometimes it takes 6–10 seconds just to load a simple list.”
Six to ten seconds might not sound huge — but in web performance terms, that’s an eternity. Users think the app is broken. Support tickets start pouring in. And yes… Google also hates slow APIs.
The stack was:
• Backend: Laravel API
• Secondary Service: Node.js microservice
• Database: PostgreSQL
At first glance, everything looked fine.
But clearly… something wasn’t.
So I rolled up my sleeves and started investigating.
🔍 Step 1 — Confirm the Problem (Never Assume)
Before changing anything, I always measure first.
So I timed the endpoints one by one (there's a minimal sketch of that kind of measurement at the end of this step).
Sure enough…
• Some endpoints were fast (80–150ms)
• But others took 2–10 seconds
So the API was not globally slow — only some routes.
That’s good news, because it means the problem is isolated.
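For completeness, this is the kind of measurement I mean: a tiny Laravel middleware that logs how long each request takes. It's a sketch with an illustrative class name, not necessarily the exact tooling used at the time.

```php
<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;

class LogResponseTime
{
    public function handle(Request $request, Closure $next)
    {
        $start = microtime(true);

        $response = $next($request);

        // Log method, path and duration in milliseconds.
        $ms = (microtime(true) - $start) * 1000;
        Log::info(sprintf('%s /%s took %.0f ms', $request->method(), $request->path(), $ms));

        return $response;
    }
}
```

Register it for the API routes and every request writes its duration to the log, which is enough to spot the outliers.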
🕵️ Step 2 — Find the “Slow” Endpoint
The slow endpoint was the one that returned a list of user orders with product data included.
The problem smelled like a database issue.
So I ran a query log.
In Laravel, the framework will happily record every query it runs.
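Roughly like this, in a tinker session or a throwaway debug route (the Order query here is illustrative):

```php
use App\Models\Order;
use Illuminate\Support\Facades\DB;

DB::enableQueryLog();

// Exercise the slow code path.
$orders = Order::where('user_id', $userId)->get();

$log = DB::getQueryLog();
dump(count($log)); // how many queries actually ran
dump($log);        // and what they were
```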
The result shocked me.
❌ There were hundreds of queries running
Why?
Because the order-listing code was triggering N+1 queries.
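If you haven't met it before, the classic shape of an N+1 looks like this (illustrative names, not the client's exact code):

```php
// One query to fetch the orders…
$orders = Order::where('user_id', $userId)->get();

// …then one more query per order, because the product
// relationship is lazy-loaded inside the loop.
$data = $orders->map(function ($order) {
    return [
        'id'      => $order->id,
        'product' => $order->product->name,
    ];
});
```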
But the relationship in the Order model was defined with hasOne(Product::class).
Except…
The real relationship was belongsTo, not hasOne.
So eager loading didn’t work correctly: hasOne expects the foreign key (order_id) to live on the products table, while belongsTo reads product_id from the orders table, so the relationship was pointed the wrong way.
Meaning:
• Fetch orders
• Loop orders
• Fetch product each time
• One request = 300+ database calls
No wonder it crawled.
🛠 Step 3 — Fix the Relationship
I opened the Order.php model and changed the relationship from hasOne (❌ wrong) to belongsTo (✅ correct).
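In shape, the corrected model looks like this (a reconstruction using the Order and Product models described above):

```php
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\BelongsTo;

class Order extends Model
{
    // Was: return $this->hasOne(Product::class);
    // hasOne assumes products has an order_id column, which is backwards here.
    public function product(): BelongsTo
    {
        // orders carries product_id, so an order belongs to a product.
        return $this->belongsTo(Product::class);
    }
}
```

With the keys pointing the right way, `with('product')` loads everything in a single extra query instead of one per order.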
Then I tested again.
Boom.
Queries dropped from 312 to 2.
Response time dropped from 6.4 seconds → 140ms.
But we didn’t stop there.
⚡ Step 4 — Add Caching (Because Users Love Speed)
Even with optimization, orders don’t change every second.
So I cached the result.
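With Laravel's cache that's only a few lines. A sketch of the idea, using the 60-second window mentioned below (the cache key and query are illustrative):

```php
use Illuminate\Support\Facades\Cache;

// Serve from cache when possible; rebuild at most once per 60 seconds.
$orders = Cache::remember("user:{$userId}:orders", 60, function () use ($userId) {
    return Order::with('product')
        ->where('user_id', $userId)
        ->get();
});
```

If stale data is a concern, calling Cache::forget() on that key whenever an order changes keeps the window honest.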
Now:
• First request = database fetch
• Next 60 seconds = lightning fast
The endpoint now averaged 40–60ms consistently.
That’s the difference between users smiling and rage-quitting.
🔄 Step 5 — Check the Node.js Service Too
The Node.js service fetched product stats.
I found a similar issue:
They were recomputing heavy analytics on every request.
So we added a short-lived cache in front of that computation.
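Same idea as on the Laravel side: compute once, reuse for a short window. A plain Node.js sketch, where computeHeavyAnalytics and the TTL stand in for the real service code:

```js
// Naive in-memory cache for the expensive product-stats computation.
const statsCache = new Map();
const TTL_MS = 60 * 1000;

async function getProductStats(productId) {
  const hit = statsCache.get(productId);
  if (hit && Date.now() - hit.at < TTL_MS) {
    return hit.value; // reuse the precomputed result
  }

  // computeHeavyAnalytics() is a stand-in for the real, expensive work.
  const value = await computeHeavyAnalytics(productId);
  statsCache.set(productId, { value, at: Date.now() });
  return value;
}
```

With more than one instance behind a load balancer you'd reach for Redis instead of an in-process Map, but the shape stays the same.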
CPU dropped. Response time stabilized.
🧠 Step 6 — Final Checks
Before declaring victory, I always:
✔ Load-test with 100+ concurrent calls
✔ Monitor CPU / DB usage
✔ Enable slow-query logs
✔ Add index if needed
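On that last point: the slow endpoint filters orders by user, so that column is the first candidate. An illustrative migration (the actual columns depend on the schema):

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration {
    public function up(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->index('user_id'); // speeds up the per-user order lookups
        });
    }

    public function down(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->dropIndex(['user_id']);
        });
    }
};
```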
Performance stayed consistent under load.
Crisis averted.
🎯 Key Lessons Learned
This wasn’t magic. Just systematic debugging.
Here’s what mattered most:
✅ Measure before guessing
Don’t “feel” performance. Prove it with numbers.
✅ N+1 Queries kill performance
Always check relationships in Laravel.
✅ Caching solves real-world latency
But do it intentionally. Avoid stale data.
✅ Node.js also benefits from caching
Especially expensive computations.
✅ Monitor production
Logs are your best friend.
💬 Final Thoughts
Performance optimization is rarely about rewriting everything.
Most of the time, it’s about:
• fixing small mistakes
• improving database usage
• reducing redundant work
And yes, slow APIs can be fixed — without stress.