Teaching Your App to Talk – Why Monitoring Matters
Have you ever heard a user say:
“Module X in my app doesn’t work,”
and your only response was:
“What do you mean? What do you see on the screen? How long has this error been happening?”
That’s a sign that your architecture isn’t talkative. And it should be.
Today, I’ll try to explain why monitoring should be an integral part of the application design process - not a sad obligation at the very end.
Architecture that doesn’t hide
When writing code, we often focus on what’s visible: components, styles, layout.
But the real power (and fragility) lies in the connections between the frontend and the data sources feeding it.
We know that the purpose of our platform/app/site/store is to generate revenue, and because monitoring doesn’t contribute to that directly, it often doesn’t get the attention it deserves - after all, it’s the invisible part of our work.
If we’re lucky, those shortcomings will never catch up with us.
But much more likely, every developer who ignores monitoring will one day receive that unpleasant call from a client saying something’s wrong.
Teach your application to speak
The app you’re building or maintaining can start talking to you.
It won’t, however, speak a language you can easily understand.
So, you need to take steps that will allow you to communicate with it effectively.
import { NextResponse } from "next/server";

export async function GET() {
  const URI = "https://httpbin.org/status/403";

  try {
    // Fetching data from an external source (this one always returns 403)
    const res = await fetch(URI);

    // If the external response is not OK, return that status to the frontend
    if (!res.ok) {
      return NextResponse.json(
        { error: "Failed to fetch users" },
        { status: res.status }
      );
    }

    // If we got a response, parse it to JSON...
    const data = await res.json();

    // ...and return it to the frontend
    return NextResponse.json({ users: data }, { status: 200 });
  } catch (error) {
    // If something else went wrong, return 500
    return NextResponse.json(
      { error: "Internal Server Error" },
      { status: 500 }
    );
  }
}

Example of a simple API in Next.js (v15) that can’t talk
The code above looks correct, but it doesn’t tell the developer where things go wrong. You’ll only find out from a user who can’t view part of the app or keeps hitting an error. A simple change can notify the developer that something’s wrong before the first user call. It also protects us against the domino effect: we learn, before the first phone call, that the service our code depends on is having trouble handling requests. The improved code might look like this:
import { NextResponse } from "next/server";

export async function GET() {
  const URI = "https://httpbin.org/status/403";

  try {
    // Fetching data from an external source (this one always returns 403)
    const res = await fetch(URI);

    // If the external response is not OK, return that status to the frontend
    if (!res.ok) {
+     console.error(
+       `Server encountered an error trying to fetch ${URI}. Encountered status code: ${res.status}`
+     );
      return NextResponse.json(
        { error: "Failed to fetch users" },
        { status: res.status }
      );
    }

    // If we got a response, parse it to JSON...
    const data = await res.json();

    // ...and return it to the frontend
    return NextResponse.json({ users: data }, { status: 200 });
  } catch (error) {
+   console.error(
+     `Server encountered an unidentifiable error. Responded with 500`
+   );
    // If something else went wrong, return 500
    return NextResponse.json(
      { error: "Internal Server Error" },
      { status: 500 }
    );
  }
}

Example of the simplest logging
These console logs help detect where errors occur - but they’re not the best way to log information.
We can read the messages, but we lack context, filtering, and structure. Therefore, a better solution is to log errors in JSON format. This allows us to structure the logs and add additional fields to make them easier to search.
import { NextResponse } from "next/server";

export async function GET(request: Request) {
  const URI = "https://httpbin.org/status/403";

  try {
    // Fetch data from an external API
    const res = await fetch(URI);

    // Handle bad responses
    if (!res.ok) {
+     console.error({
+       message: "error while trying to fetch ResourceName",
+       path: URI,
+       requestedPath: request.url,
+       returnedCode: res.status,
+       timestamp: Date.now(),
+     });
      return NextResponse.json(
        { error: "Failed to fetch users" },
        { status: res.status }
      );
    }

    // Parse the response
    const data = await res.json();

    // Return the fetched data
    return NextResponse.json({ users: data }, { status: 200 });
  } catch (error) {
    // Handle unexpected errors
+   console.error({
+     message: "error while trying to fetch ResourceName",
+     requestedPath: request.url,
+     timestamp: Date.now(),
+     catchError: error,
+   });
    return NextResponse.json(
      { error: "Internal Server Error" },
      { status: 500 }
    );
  }
}

Example of better JSON-structured error logging
Although the idea of logging every request and response where something went wrong is tempting - and would certainly make it easier to pinpoint issues quickly - it’s often not feasible due to the cost and the complexity of handling such massive data volumes.
It’s also important to remember that logging itself consumes resources that your application could otherwise use for efficient operation.
You should also be cautious about logging sensitive user data such as tokens, cookies, IDs, or personal information.
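One lightweight way to reduce that risk is to mask known sensitive fields before the entry ever reaches console.error (or your logger). The helper and field list below are purely illustrative, a minimal sketch rather than a complete solution:

// Illustrative list of field names we never want to log verbatim
const SENSITIVE_KEYS = ["token", "cookie", "authorization", "password", "email"];

function redact(entry: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(entry)) {
    // Replace the value of any field whose name suggests sensitive content
    safe[key] = SENSITIVE_KEYS.includes(key.toLowerCase()) ? "[REDACTED]" : value;
  }
  return safe;
}

console.error(
  redact({
    message: "error while trying to fetch ResourceName",
    token: "…", // masked before it ever hits the logs
    returnedCode: 403,
    timestamp: Date.now(),
  })
);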
The above example will log the error as the following JSON object:
{
  message: 'error while trying to fetch ResourceName',
  path: 'https://httpbin.org/status/403',
  requestedPath: 'https://localhost:3000/api/example',
  returnedCode: 403,
  timestamp: 1761086017092
}

Where to learn more
If you take logging seriously and want your app to express its state clearly, consider using a standard format like ECS (Elastic Common Schema).

You also don’t have to implement logging from scratch - many great libraries do this for you. I personally recommend Pino, and most loggers - Pino included - have plugins or packages for ECS formatting.
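As a rough sketch of what that can look like - assuming the pino and @elastic/ecs-pino-format packages are installed (the exact import shape may differ between versions) - the structured logging from the earlier example could be expressed as:

import pino from "pino";
import { ecsFormat } from "@elastic/ecs-pino-format";

// Logger that emits ECS-compliant JSON lines (timestamp and level are added automatically)
const logger = pino(ecsFormat());

// Structured error log with extra searchable fields
logger.error(
  { path: "https://httpbin.org/status/403", returnedCode: 403 },
  "error while trying to fetch ResourceName"
);

Because the output is plain JSON on stdout, it can be picked up later by whichever log shipper you choose.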

Learn to listen to your application
Now that your application can speak, you need to learn how to listen to it - and not just listen, but listen with understanding (more on that later).
The most common way to “listen” to an application is to send its logs to aggregation services such as Datadog or LogRocket. However, due to costs - or when dealing with a still-growing, early-stage app - it’s often better to choose something you can manage yourself and that isn’t limited by a free-tier plan.
From my experience, I can recommend setting up a time-series database such as InfluxDB, or a log aggregation system such as Grafana Loki.
If you have a bit more infrastructure available, you can go a step further and deploy the ELK stack (Elasticsearch, Logstash, and Kibana - three applications maintained by Elastic).
Once you have a configured destination that can receive your logs, you still need a way to send them there. There are many possible approaches.
For example, in the case of InfluxDB, you can:
- use a pino plugin to send data directly,
- connect to Influx from within your application (see the sketch after this list),
- create a custom microservice that receives logs (e.g., via POST requests) and writes them to the database,
- or use a tool such as Telegraf to handle the data transfer.
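As a rough sketch of the second option - writing to Influx from within the application - assuming the official @influxdata/influxdb-client package and placeholder connection details:

import { InfluxDB, Point } from "@influxdata/influxdb-client";

// Placeholder URL, token, org, and bucket - adjust to your own InfluxDB instance
const influx = new InfluxDB({
  url: "http://localhost:8086",
  token: process.env.INFLUX_TOKEN ?? "",
});
const writeApi = influx.getWriteApi("my-org", "app-logs");

export function reportError(message: string, statusCode: number) {
  // Each error becomes a point in the "errors" measurement
  const point = new Point("errors")
    .tag("service", "frontend-api")
    .stringField("message", message)
    .intField("statusCode", statusCode);

  writeApi.writePoint(point);
  // Remember to flush with writeApi.close() when the process shuts down
}

Whichever route you pick, it’s worth keeping the write path non-blocking so a slow log sink never slows down the request itself.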
When choosing the right log storage solution, you should consider its performance, community support, and available features.
If you have plenty of available resources, I recommend deploying the ELK stack, which likely provides everything you need:
- Elasticsearch is an open-source engine for storing, indexing, searching, and aggregating data - including time-series data such as logs.
- Logstash is an application that can ingest logs from a wide variety of sources and forward them - enriched with additional metadata about their origin - to a destination of your choice (in our case, Elasticsearch); see the sketch after this list.
- Kibana is an application that allows you to conveniently query Elasticsearch without writing complex ES|QL queries. It also enables the creation of dashboards based on your data, which can beautifully visualize the stored logs.
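For instance - assuming Logstash has its http input plugin enabled at a placeholder address - the application (or a small middleman service) could ship structured logs with a plain POST request:

// Placeholder Logstash endpoint - assumes the http input plugin is configured
const LOGSTASH_URL = "http://localhost:8080";

export async function shipLog(entry: Record<string, unknown>): Promise<void> {
  try {
    await fetch(LOGSTASH_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ ...entry, timestamp: Date.now() }),
    });
  } catch {
    // Log shipping must never break the request that produced the log
  }
}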
Elasticsearch requires proper indexing to aggregate data efficiently, which means that the fields we send to it must always have consistent types. If even a single log contains, for example, an object in the error field instead of a string, it will break the ability to search logs by that specific field until a new index is created (e.g., through a rollover process).
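A simple way to guard against this - sketched here as an illustration, not a fixed recipe - is to normalize troublesome fields to a single type before the entry is emitted:

// Always log errors as strings so the Elasticsearch field mapping stays consistent
function toErrorString(error: unknown): string {
  if (error instanceof Error) {
    return error.message;
  }
  // Objects, numbers, etc. are serialized instead of being logged as raw values
  return typeof error === "string" ? error : JSON.stringify(error);
}

console.error({
  message: "error while trying to fetch ResourceName",
  timestamp: Date.now(),
  catchError: toErrorString(new TypeError("fetch failed")),
});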
What's next?
Once users are actively working in your application, problems are bound to occur - someone might attempt an unauthorized action, a session token might expire in the middle of a request, or AWS might misconfigure DNS again. You’ll learn about these issues not from the users, but from a reliable source of your own.
This allows you to react faster and more confidently because you’ll know why the problem occurred without needing to extract that information from the user.
The next step is to set up proper alerting. You know that your application will occasionally log an error - this is natural, especially in larger systems. But when instead of one error per hour you start seeing 1,000 errors per minute, it’s a clear sign that something is seriously wrong.
You can’t be constantly available, staring at a monitor to check whether everything in the application is working correctly. That’s why it’s worth setting up proper notifications. There are many ways to do this, but the simplest approach (when using the ELK stack) is to create alerts directly in Kibana. You can configure alerts to be sent to various destinations, such as email, Microsoft Teams, or Slack.
Over time, logs and charts can become easy to overlook, and you may stop noticing inconsistencies or warning signs. It’s worth conducting regular audits - let another developer review them. They will likely have valuable feedback; listen carefully, as some of it will be accurate and can help improve data readability or make the logs more precise.
Analyze incidents and check whether they could have been prevented through alerting. Make sure this review process is not a one-time event, but occurs cyclically.
Conclusion
Code and monitoring are not two separate worlds. Good architecture produces high-quality data, and good monitoring allows you to understand how that architecture performs in the real, often harsh, world.
Stop building mute applications. Let’s start building systems that are transparent, communicative, and self-explanatory. This is not an extra feature - it is the standard of modern, professional development.

