AI already writes code, proposes patches, generates tests, reviews pull requests and scans entire repositories for bugs. For system administrators and developers, this is no longer a future promise: it is a working tool that is already entering terminals, IDEs, pipelines and collaboration platforms. The important question is no longer whether it can be used, but within what limits.

The answer starts with an uncomfortable but necessary rule: code can be AI-assisted, but responsibility cannot be automated. An agent can suggest a function that looks flawless, prepare a migration or fix a concurrency bug, but it cannot assume legal, operational or technical responsibility for what goes into production. If something breaks a service at three in the morning, the model does not answer for it. The person, the team and the organisation that accepted it do.

The Linux kernel sets a useful standard for everyone

The Linux kernel documentation has drawn a clear line for the use of AI coding assistants in code contributions. Agents cannot add Signed-off-by tags, because only a person can certify the Developer Certificate of Origin. The human contributor must review the AI-generated code, check licence compatibility, add their own signature and take full responsibility for the contribution.

The same guidance introduces an Assisted-by tag to disclose the use of AI tools. This is not a cosmetic detail. It is a way of telling the truth in the project history: this change was reviewed and signed by a person, but it involved assistance from a specific tool. For internal projects, enterprise repositories and critical software, that distinction should become standard practice.

A good record could include which agent was used, which model version, at what stage it was involved and who reviewed the result. There is no need to turn every commit into a bureaucratic dossier, but it is important to avoid AI-generated code appearing as if it had been fully written and reasoned through by a human. Traceability is not the enemy of productivity. It is what makes later investigation possible.
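As an illustration, that kind of disclosure can even be checked mechanically. What follows is a minimal sketch of a Git commit-msg hook that inspects the trailers described above; the enforcement logic and the version-note format are assumptions for this article, not kernel tooling or policy.

    #!/usr/bin/env python3
    """Minimal commit-msg hook sketch: if a commit discloses AI assistance,
    require a human Signed-off-by as well. Hypothetical tooling, not the
    kernel's own scripts."""

    import re
    import sys

    def check_trailers(message: str) -> list[str]:
        errors = []
        assisted = re.findall(r"^Assisted-by:\s*(.+)$", message, re.MULTILINE)
        signed = re.findall(r"^Signed-off-by:\s*(.+)$", message, re.MULTILINE)
        if assisted and not signed:
            errors.append("Assisted-by present but no human Signed-off-by.")
        for tool in assisted:
            # Encourage recording the tool and model version, e.g.
            # "Assisted-by: ExampleAgent (model-x-2025-06)" -- this format
            # is an assumption, not a documented convention.
            if "(" not in tool:
                errors.append(f"Assisted-by '{tool}' lacks a version note.")
        return errors

    if __name__ == "__main__":
        problems = check_trailers(open(sys.argv[1], encoding="utf-8").read())
        for p in problems:
            print(f"commit-msg: {p}", file=sys.stderr)
        sys.exit(1 if problems else 0)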

For DevOps, SRE and platform teams, this rule has a practical consequence: agents should not have permissions equivalent to a human maintainer. They can open a branch, propose a patch or generate a report, but merge, signing, deployment and changes to critical infrastructure should still go through human and automated controls.

“Vibe coding” also scales technical debt

The problem with AI applied to development is not that it always writes bad code. The problem is that it writes with great confidence even when it is wrong. It can generate code that compiles, passes superficial tests and follows the repository style, while still introducing incomplete validation, an unnecessary dependency, poor error handling or a vulnerable path that only appears under real load.
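A deliberately flawed sketch makes that failure mode concrete. The hypothetical handler below runs, follows typical repository style and passes a happy-path test, yet trusts its input completely; the second function is the boring fix a reviewer should insist on.

    import os

    UPLOAD_ROOT = "/var/app/uploads"

    def read_upload(filename: str) -> bytes:
        # Looks reasonable and passes a happy-path test, but "filename" is
        # never validated: "../../etc/passwd" escapes UPLOAD_ROOT.
        path = os.path.join(UPLOAD_ROOT, filename)
        with open(path, "rb") as f:
            return f.read()

    def read_upload_safely(filename: str) -> bytes:
        # The fix a reviewer should insist on: resolve the path, then
        # confirm it is still inside the allowed directory.
        path = os.path.realpath(os.path.join(UPLOAD_ROOT, filename))
        if not path.startswith(UPLOAD_ROOT + os.sep):
            raise ValueError("path escapes upload directory")
        with open(path, "rb") as f:
            return f.read()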

Veracode analysed more than 100 language models in code generation tasks and concluded that 45% of the samples introduced known security flaws. In another summary of the research, the company noted that these flaws included OWASP Top 10 vulnerabilities, with especially high rates in some languages and task types.

For a senior developer, that figure should not lead to banning AI. It should lead to treating it like any other untrusted input. AI-generated code must go through review, tests, static analysis, dependency analysis, secret detection and security validation. If it affects authentication, authorisation, cryptography, parsing, user input, deployments, infrastructure or sensitive data, review should be even stricter.

The risk multiplies when agents operate across large repositories. A small change can have side effects on deployment scripts, IAM policies, database migrations, CI/CD jobs or infrastructure-as-code templates. That is why the standard cannot be “if it compiles, it is fine”. The right questions are what surface it touches, what permissions it needs and what tests prove that it does not break anything important.
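Those questions can be asked before a human ever reads the diff. The sketch below triages an AI-proposed change by the surface it touches; the path patterns and risk tiers are assumptions to be adapted to a given repository layout, not a standard.

    import fnmatch

    # Risk patterns are assumptions for illustration.
    # Note: fnmatch's "*" also matches across "/" separators.
    HIGH_RISK_PATTERNS = [
        "*.tf", "*.tfvars",          # infrastructure-as-code
        ".github/workflows/*",       # CI/CD jobs
        "*migrations/*",             # database migrations
        "*iam/*", "*auth/*",         # permissions and authentication
        "Dockerfile", "deploy/*",    # build and deployment
    ]

    def risk_tier(changed_paths: list[str]) -> str:
        for path in changed_paths:
            if any(fnmatch.fnmatch(path, pat) for pat in HIGH_RISK_PATTERNS):
                return "high"    # require senior review and the full suite
        return "normal"          # standard review path

    # A "small" dependency bump that also edits a workflow file:
    print(risk_tier(["requirements.txt", ".github/workflows/deploy.yml"]))  # high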

Least privilege is no longer enough: we need least agency

System administrators know the principle of least privilege well. A user or service should have only the permissions strictly needed to do its job. With AI agents, an extra layer is needed: least agency. The agent should not be able to do everything it is technically capable of doing, but only what the task requires.

An agent that reviews logs does not need credentials to modify production. An agent that generates a patch does not need to publish a release. An agent that analyses Terraform should not apply changes without approval. An agent that proposes dependency updates should not modify secrets, pipelines or network policies.

This separation must be reflected in real permissions: short-lived tokens, specific service accounts, limited repositories, isolated environments, sandboxed execution, read-only access by default and clear policies for escalating actions. Every relevant operation should also be logged: files read, commands executed, changes proposed, dependencies added and decisions made by the person reviewing the work.
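A default-deny capability table is one simple way to encode least agency, with every decision logged. In the sketch below, the role names and actions are hypothetical; the point is that anything not explicitly granted is refused and recorded.

    import json
    import time

    # Hypothetical roles: each agent gets only what its task requires.
    AGENT_CAPABILITIES = {
        "log-reviewer": {"read_logs"},
        "patch-author": {"read_repo", "open_branch", "propose_patch"},
        "iac-analyser": {"read_repo", "comment"},   # never "apply_infra"
    }

    def is_allowed(role: str, action: str) -> bool:
        # Unknown roles and unlisted actions are denied by default.
        return action in AGENT_CAPABILITIES.get(role, set())

    def perform(role: str, action: str) -> None:
        allowed = is_allowed(role, action)
        # Log every decision, allowed or not, for later investigation.
        print(json.dumps({"ts": time.time(), "agent": role,
                          "action": action, "allowed": allowed}))
        if not allowed:
            raise PermissionError(f"{role} may not {action}")
        # ... dispatch to the actual tool here ...

    perform("patch-author", "propose_patch")   # allowed, logged
    # perform("patch-author", "merge")         # raises PermissionError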

Governance should not live in a PDF detached from the technical workflow. It should live in Git, CI/CD, the ticketing system, branch policies, audit logs and security tools. If an agent touches code or infrastructure, its activity must be verifiable.

The supply chain shows where implicit trust fails

Recent security incidents teach the same lesson from another angle. In April 2026, CPUID’s website was compromised for a short period and served manipulated installers for tools such as CPU-Z and HWMonitor. The packages used DLL sideloading with a CRYPTBASE.dll file to deploy malware, according to security reporting.

The case is useful for sysadmins because it shows that a signature or long-standing trust in a provider is not enough. A binary may look legitimate, a site may be official and a download may come from a known URL, yet the distribution chain may still be compromised. Verification must include hashes, trusted repositories, allowlists, EDR, execution control, behavioural analysis and response procedures.
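Part of that verification is mechanical. The sketch below compares a downloaded installer against a SHA-256 allowlist pinned out-of-band rather than trusting the source URL; the filename and digest are placeholders, not real published hashes.

    import hashlib
    import sys

    # Hashes published out-of-band (vendor page, internal registry).
    # The filename and digest below are placeholders.
    PINNED_SHA256 = {
        "example-tool-setup.exe": "0" * 64,
    }

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(path: str, name: str) -> None:
        expected = PINNED_SHA256.get(name)
        if expected is None:
            sys.exit(f"{name}: not on the allowlist, refusing to install")
        if sha256_of(path) != expected:
            sys.exit(f"{name}: hash mismatch, possible tampering")
        print(f"{name}: hash verified")

    verify("downloads/example-tool-setup.exe", "example-tool-setup.exe")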

Also in April 2026, Apple fixed CVE-2026-28950, a vulnerability in Notification Services that could cause notifications marked for deletion to be unexpectedly retained on the device. The flaw was linked to forensic scenarios in which Signal notification content could be recovered even after the app had been deleted.

The lesson for administrators and developers is clear: an application can encrypt properly, a binary can be signed and an AI assistant can generate code that appears correct, but the complete system fails at its weakest layer. Real security depends on how the operating system, permissions, local storage, logs, notifications, dependencies and distribution are combined.

A practical model for technical teams

AI governance in development and operations should start with simple rules. All AI-assisted code should be disclosed when relevant. Every change must have a responsible person. Agents must operate with limited permissions. Critical actions must require human approval. New dependencies must be validated. Secrets should not be available to agents unless there is a justified need. Generated outputs should go through the same controls as human-written code, not fewer.

In pipelines, it is useful to separate three phases: assistance, verification and promotion. AI can assist with generation, review or diagnosis. Verification should rely on deterministic tools wherever possible: tests, SAST, DAST, SBOMs, container scanning, licence checks, IaC analysis and compliance policies. Promotion to higher environments must remain under the control of the responsible team.
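A compact gate script shows how that split can look in practice. The tools listed are stand-ins for whatever a given pipeline actually runs (pytest, Bandit and pip-audit here, purely as examples); what matters is that verification is deterministic and promotion fails closed without a named human approver.

    import subprocess
    import sys

    # Phase 2: deterministic verification -- the same gates for AI-assisted
    # and human code. Tool choices here are examples, not a prescription.
    CHECKS = [
        ["pytest", "-q"],            # tests
        ["bandit", "-r", "src/"],    # SAST
        ["pip-audit"],               # dependency vulnerabilities
    ]

    def verify() -> bool:
        for cmd in CHECKS:
            if subprocess.run(cmd).returncode != 0:
                print(f"verification failed: {' '.join(cmd)}", file=sys.stderr)
                return False
        return True

    def promote(approver):
        # Phase 3: promotion stays behind an explicit human approval.
        if approver is None:
            sys.exit("no human approver recorded, refusing to promote")
        print(f"promotion approved by {approver}")

    if __name__ == "__main__":
        if verify():
            promote(approver=None)   # fails closed until a person signs off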

For production environments, the use of agents must come with observability. It is not enough to know that a change was deployed. Teams need to know who approved it, what the AI generated, which checks passed, which metrics changed afterwards and how to roll it back. AI can accelerate the development cycle, but it can also accelerate the spread of errors if there are no technical brakes.
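One way to achieve that is to capture each deployment as a single structured event. The field names below are illustrative, not a schema from any particular tool.

    import json
    from datetime import datetime, timezone

    def deployment_record(change_id, approver, ai_tool, checks, rollback):
        # One structured event answering: who approved, what the AI did,
        # which checks passed, and how to undo it.
        return json.dumps({
            "change_id": change_id,
            "approved_by": approver,       # the accountable human
            "ai_assistance": ai_tool,      # agent and model version
            "checks_passed": checks,       # deterministic gates
            "rollback": rollback,          # tested escape hatch
            "deployed_at": datetime.now(timezone.utc).isoformat(),
        })

    print(deployment_record(
        change_id="CHG-1234",
        approver="oncall-sre@example.com",
        ai_tool="example-agent (model 2025-06)",
        checks=["tests", "sast", "iac-scan"],
        rollback="deployctl rollback CHG-1234",   # hypothetical CLI
    ))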

The conclusion is not defensive. AI can be very useful for sysadmins and developers: it helps read legacy code, explain logs, generate playbooks, write tests, detect patterns, prepare scripts and document systems. But the more it enters real workflows, the more important it becomes to keep a clear boundary between assistance and responsibility.

The human remains in control not out of nostalgia, but by operational design. Someone must sign, review, respond and put out the fire if it comes. AI can accompany the on-call shift, but it cannot carry the legal pager for production.

Frequently asked questions

Can an AI sign commits or patches in serious projects?
It should not. In the Linux kernel, AI agents cannot use Signed-off-by; a person must review, certify and take responsibility for the change.

What does Assisted-by mean in a commit?
It indicates that a tool helped create or review a contribution. It provides transparency without replacing human responsibility.

What permissions should an AI agent have in a repository?
The minimum required. The safest approach is to start with read access or proposals in separate branches, without direct permissions to merge, modify secrets, change CI/CD or deploy to production.

How should AI-generated code be reviewed?
Like untrusted code: human review, tests, static analysis, dependency scanning, licence checks, secret detection and specific validation if it touches security, infrastructure or sensitive data.
