AI-assisted programming has accelerated the delivery of prototypes, internal tools and products that used to take weeks to reach a first usable version. The catch is that this speed also pushes code into production that looks correct, compiles, works well in a demo and still carries basic security flaws: exposed credentials, endpoints without rate limiting, weak input validation, overly detailed error messages and poorly enforced permissions.

The problem is not necessarily the AI. AI writes what it is asked to write, with the context it receives and the constraints it is given. If a developer asks it to “create an API for user registration”, they may get a functional API. But if they do not ask for login throttling, server-side validation, secure secret handling, authorization controls and security headers, the result may sit somewhere between a prototype and an open door.

So-called vibe coding has popularized a way of building software by iterating with an assistant until something works. That can be useful for exploring ideas, learning frameworks or speeding up repetitive tasks. But in production, “it works” does not mean “it is secure”. In fact, many of the most dangerous flaws do not break the application. They stay quiet until someone abuses them.

The risk is not using AI, but deploying without judgement

A common pattern is starting to appear in projects built with coding assistants. The application is created quickly, with a reasonable structure and a convincing interface. Then external services are connected: a database, a payment gateway, authentication, transactional email or cloud storage. That is where things begin to go wrong if nobody reviews how credentials, routes, roles and user inputs are protected.

A .env file accidentally pushed to GitHub can expose API keys, database credentials or cloud service tokens. GitHub provides secret protection for exactly this reason, including secret scanning and push protection, which can block pushes containing detected secrets before they are published. That protection helps, but it does not replace basic security discipline in development.

Rate limiting is another clear example. A login endpoint without attempt limits invites brute-force attacks, credential stuffing and automated abuse. OWASP recommends throttling controls for authentication, such as capping failed login attempts or equivalent protections against repeated automated attacks. This is not sophisticated protection reserved for banks. It is minimum hygiene for any service exposed to the Internet.

Input validation is also often underestimated. Many AI-generated prototypes validate in the frontend because that is the most visible layer, but leave the server too trusting. That is a classic bad practice. The browser belongs to the user, and everything that reaches the backend must be treated as untrusted: forms, parameters, headers, files, routes, JSON payloads, images, filenames and any data coming from clients or integrations.

Five checks before any deployment

The first check should focus on usage limits. Every public or semi-public API needs rate limiting adapted to its risk profile. Authentication routes should have stricter limits, for example five attempts every 15 minutes per IP, or per IP and username combination, returning a 429 response with a Retry-After header. Read, write, file upload and AI generation routes should also have quotas, because abuse does not always aim to break in. Sometimes it aims to exhaust resources or trigger unexpected costs.
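
As an illustration, here is a minimal sketch of that first check, assuming a Node.js service built with Express and the express-rate-limit package (version 7); the article does not prescribe a stack, and the routes and quotas below are hypothetical:

```typescript
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();
app.use(express.json());

// Strict limiter for authentication: 5 attempts per 15 minutes per IP.
// Blocked requests receive HTTP 429 and, by default, a Retry-After header.
const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  limit: 5,                 // maximum attempts per window
  standardHeaders: true,    // send RateLimit-* response headers
  legacyHeaders: false,     // drop the older X-RateLimit-* headers
});

// Looser quota for ordinary reads, so scraping and cost abuse stay bounded.
const apiLimiter = rateLimit({ windowMs: 60 * 1000, limit: 100 });

app.post("/login", loginLimiter, (req, res) => {
  // ... verify credentials server-side ...
  res.status(200).json({ ok: true });
});

app.get("/api/items", apiLimiter, (_req, res) => {
  res.json({ items: [] });
});
```

The same library accepts a custom key generator, which is how a combined IP-and-username key would be implemented for the stricter variant mentioned above.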

The second check is secret scanning. The codebase must be checked for keys, tokens, passwords, connection strings, certificates and provider credentials. The .env file must be in .gitignore, but that is not enough if it has already been committed once. Git history, frontend bundles, sourcemaps, logs, error messages, build-time variables and configuration files also need review. If a key has leaked, the correct response is not simply deleting it from the repository. It must be revoked and replaced.
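
A lightweight pre-commit scan can catch the most obvious leaks before they reach the repository. The sketch below is a TypeScript heuristic with illustrative patterns, not a substitute for dedicated tools such as gitleaks or GitHub's push protection, and note that it checks the working tree only, not git history:

```typescript
// Minimal pre-commit secret scan: walks the working tree and flags
// strings that look like credentials.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const SECRET_PATTERNS: [string, RegExp][] = [
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["Stripe live key", /sk_live_[0-9a-zA-Z]{24,}/],
  ["private key block", /-----BEGIN [A-Z ]*PRIVATE KEY-----/],
  ["hardcoded credential", /(password|secret|token)\s*[:=]\s*['"][^'"]{8,}['"]/i],
];

const SKIP_DIRS = new Set(["node_modules", ".git", "dist", "build"]);

function scan(dir: string): void {
  for (const name of readdirSync(dir)) {
    const path = join(dir, name);
    if (statSync(path).isDirectory()) {
      if (!SKIP_DIRS.has(name)) scan(path);
      continue;
    }
    const text = readFileSync(path, "utf8");
    for (const [label, pattern] of SECRET_PATTERNS) {
      if (pattern.test(text)) {
        console.warn(`Possible ${label} in ${path}`);
        process.exitCode = 1; // non-zero exit fails the hook and blocks the commit
      }
    }
  }
}

scan(process.cwd());
```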

The third check concerns sensitive configuration. Many modern applications mix public and private variables. In frontend frameworks, it is easy to accidentally expose variables that end up packaged into JavaScript. The rule should be simple: no real secret should ever reach the client. A .env.example file can exist, but only with variable names and fake values. Production configuration should be managed through the deployment environment or a secrets manager, not through files casually shared across teams.
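
One way to enforce that rule, sketched here with hypothetical variable names, is to fail fast on missing secrets at startup and to give the client only an explicit allow-list of public values:

```typescript
// Server-only configuration, read once at startup. Nothing in this
// object should ever be serialized into a response or a client bundle.
const required = ["DATABASE_URL", "STRIPE_SECRET_KEY", "SESSION_SECRET"] as const;

for (const name of required) {
  if (!process.env[name]) {
    // Fail fast: a missing secret should stop the boot, not surface
    // later as an undefined value deep inside a request handler.
    throw new Error(`Missing required environment variable: ${name}`);
  }
}

export const serverConfig = {
  databaseUrl: process.env.DATABASE_URL!,
  stripeSecretKey: process.env.STRIPE_SECRET_KEY!,
  sessionSecret: process.env.SESSION_SECRET!,
};

// The only values the client may see: an explicit allow-list of
// non-sensitive settings, never a spread of process.env.
export const publicConfig = {
  apiBaseUrl: process.env.PUBLIC_API_BASE_URL ?? "/api",
  appName: "example-app",
};
```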

The fourth check is server-side validation. Every input should be validated by type, size, format and context. Oversized payloads, suspicious paths, disallowed extensions, unexpected fields and out-of-range values should be rejected. For databases, parameterized queries should be the default. For HTML or user-generated content, XSS must be controlled. For files and paths, path traversal must be prevented. And for any sensitive action, client-side validation should only improve user experience, never act as a security barrier.
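
Continuing the hypothetical Express example, a registration handler might enforce this with a schema library such as Zod, validating type, size and format on the server and rejecting unexpected fields:

```typescript
import express from "express";
import { z } from "zod";

const app = express();
app.use(express.json());

// Schema for a registration payload: type, size and format are all
// enforced on the server, regardless of what the frontend checked.
const RegisterSchema = z
  .object({
    email: z.string().email().max(254),
    password: z.string().min(12).max(128),
    displayName: z.string().min(1).max(64),
  })
  .strict(); // reject unexpected fields instead of silently accepting them

app.post("/register", (req, res) => {
  const parsed = RegisterSchema.safeParse(req.body);
  if (!parsed.success) {
    // Generic message: do not echo internals or raw values back.
    return res.status(400).json({ error: "Invalid input" });
  }
  // With a SQL driver, use parameterized queries, never concatenation:
  // await db.query("INSERT INTO users (email) VALUES ($1)", [parsed.data.email]);
  res.status(201).json({ ok: true });
});
```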

The fifth check is a general security audit before deployment. Teams should review broken authentication, missing authorization on routes, exposed stack traces, vulnerable dependencies, security headers, overly permissive CORS, cookies without appropriate flags, missing HTTPS, lack of CSP or HSTS, exposed admin endpoints and logs that capture sensitive information. OWASP continues to place broken access control, cryptographic failures, injection and security misconfiguration among the core risks for web applications.
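
Several of those items can be covered with a few middleware defaults. The following sketch again assumes an Express stack, using the helmet and cors packages; the origin and cookie names are placeholders:

```typescript
import express from "express";
import helmet from "helmet";
import cors from "cors";

const app = express();

// helmet sets a baseline of security headers (including HSTS and a
// default Content-Security-Policy) in a single middleware.
app.use(helmet());

// Explicit CORS allow-list instead of a wildcard origin.
app.use(cors({ origin: ["https://app.example.com"], credentials: true }));

// Session cookie flags: unreadable from JavaScript, HTTPS only, same-site.
app.get("/session", (_req, res) => {
  res.cookie("session", "opaque-token", {
    httpOnly: true,
    secure: true,
    sameSite: "strict",
  });
  res.json({ ok: true });
});

// Final error handler: stack traces go to server logs, never to clients.
app.use(
  (err: Error, _req: express.Request, res: express.Response,
   _next: express.NextFunction) => {
    console.error(err); // full detail stays server-side
    res.status(500).json({ error: "Internal server error" });
  }
);
```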

AI should also receive security prompts

The practical point is that these controls can be built into the AI workflow itself. It is not enough to ask the assistant to write a function; the assistant should be asked to think like a security reviewer before the work is considered complete. A useful prompt should not just say “build an API”, but “build an API with rate limiting, server-side validation, role-based authorization, secure error handling and abuse-case tests”.

For AI-generated projects, there should be a mandatory review phase before deployment. The assistant can help inspect routes, detect endpoints without authorization, search for secrets, propose limits, review dependencies and generate tests. But the developer has to know what to ask and verify the answer. Delegating without understanding is a fast way to create security debt.

Final responsibility remains human. If an application exposes a Stripe key, allows users to access other people’s data or accepts malicious payloads, the affected customer will not care that part of the code was written by a model. AI can accelerate development, but it does not sign off the deployment. The team does.

The good news is that basic security does not need to wait for an annual audit. It can be integrated into every pull request, every pipeline and every AI-assisted coding session. Secret scanning, dependency analysis, authorization tests, input validation, security header checks and usage limits should be part of the definition of “done”.

Programming with AI should not lower the bar. It should raise it. If an assistant makes it possible to write code faster, it should also be used to review more, test better and document risks before the software reaches production. The problem is not that AI generates insecure code. The problem is treating generated code as production-ready just because it appears to work.

Frequently asked questions

Is it dangerous to code with AI?
Not by itself. The risk appears when AI-generated or AI-assisted code is deployed without security review, testing and human oversight.

What is rate limiting and why does it matter?
Rate limiting restricts the number of requests allowed within a given period. It helps reduce brute-force attacks, automated abuse, aggressive scraping and excessive resource consumption.

What should I do if I pushed a .env file to GitHub?
Deleting it is not enough. You should revoke the exposed credentials, generate new keys, review the repository history and enable secret detection and push protection tools.

What should a developer review before deploying AI-generated code?
Rate limiting, secrets, environment variables, server-side validation, authorization, dependencies, security headers, exposed errors and data access permissions.
