Since many people want AI in Lazarus, I asked an AI what it thinks about it:
While artificial intelligence (AI) tools are revolutionizing code generation in modern programming environments, using them for software development in Lazarus—especially in a commercial context—raises a number of serious legal, copyright, and confidentiality issues.
Copyright and Source Code Ownership
AI code generators are typically trained on huge datasets from public and private code repositories. They can output code snippets that are identical or highly similar to copyrighted or licensed code, often without necessary attribution or license compliance. This creates a real risk of unintentionally violating licensing terms—such as GPL or other copyleft licenses—which might legally obligate the company to disclose its own proprietary source code or restrict how it can distribute software. Furthermore, works generated primarily by AI may not qualify for copyright protection at all if they lack significant human creative input, leaving them exposed to free use by third parties.
Confidentiality & Trade Secrets
Many AI tools process prompts in the cloud. If proprietary source code or confidential algorithms are submitted to an AI assistant, they may leak into external systems or even the AI’s training data, resulting in trade secret loss and competitive disadvantages. This risk is particularly acute for companies working with closed-source or internal software in Lazarus/FPC, where source code secrecy is critical to product strategy.
Legal Uncertainty & Attribution
The laws governing AI-generated code are unsettled and vary regionally. Litigation is ongoing about how much AI-generated code may breach open-source licenses, how much similarity triggers obligations, and whether copyright even applies. Most AI tools also do not reliably track or provide clear sources for their code suggestions, making proper attribution and license compliance difficult for developers.
In summary: Integrating AI-powered coding tools into Lazarus programming can expose your company to potentially severe legal, copyright, and confidentiality risks, especially when handling proprietary source code. Strict review processes, clear company policies, and cautious contracts are mandatory, and in many cases, it is safer not to use AI at all for core software development in Lazarus environments.
Furthermore, AI can never create anything truly new; it only recombines what is already known according to statistical algorithms and then feeds the result back into itself. The outcome is never excellence, but in the long run something less than mediocrity. AI is trained to give answers that are superficially satisfactory, and it does not go into depth unless the questioner insists.
I find that the solution lies in doing your own routine research, even without AI.
Therefore I asked the AI whether AI is a thief -- here is the first answer; note how the AI separates itself from the responsibility:
AI itself is not a "thief" in the legal sense because it is a tool, not a conscious actor. However, AI can engage in data misuse or data theft if it collects, copies, or processes sensitive or proprietary data without proper consent or authorization. This unauthorized use of data fits the definition of data theft, which involves the unlawful acquisition or use of protected information without permission. So, while AI is not morally or legally equivalent to a thief, its misuse can lead to theft-like consequences and legal violations.