Interesting Links:
https://www.courthousenews.com/microsoft-and-github-ask-court-to-scrap-lawsuit-over-ai-powered-copilot/
https://techcrunch.com/2023/01/27/the-current-legal-cases-against-generative-ai-are-just-the-beginning/?guccounter=1
https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-action-lawsuit-ai-copyright-violation-training-data
Because this is a problem not with AI itself, but with the people on corporate boards. People at the top of corporations are greedy for money and power (it is obvious that they want to make money at all costs, preferably from other people's work, for which those people are not paid). These first lawsuits are just the beginning, and the plaintiffs are right. There would be no such lawsuits if the AI tools provided, along with their answers, information about the sources of their knowledge. Unfortunately, this area of human life (i.e. the use of AI) has not yet been legally regulated, because it is still too new.
And this is probably the crux of the matter. Because if such a "smart AI chat" revealed how it knows what it returns, the charm of "smart artificial intelligence" would be broken. In other words, an ordinary person, fascinated by AI, is amazed at how "smart" this AI is (and how clever these corporations are to have invented such smart AI). However, if such a "smart AI" also provided sources alongside the relevant information, the average person would no longer be dazzled by the AI's wisdom. Instead, they would have the impression that they are simply using a slightly better and more extensive "search engine" (which is true). And then there would no longer be the "wow" effect that corporate management presumably counts on.
However, current AI is not a breakthrough. As AI skeptics (of whom I am one) have rightly noted, it is nothing more than elaborate software that searches and transforms existing data (created by humans) according to a set of given parameters. This software creates nothing on its own, contrary to what some people may think. And there is no chance that this will change in the near future. The current third wave, like the previous two, will hit a technological wall at some point. For a while, people will once again lose interest in AI (as before), and when, after some time, the technological barriers are overcome, some will slowly return to the topic.
Also, viewing AI itself as a threat (e.g. SkyNet) is somewhat exaggerated. A much bigger threat is the human decision to entrust certain areas of life to AI without human control (supervision). Can you blame a device for causing harm to someone? In such situations it is rather the designer or manufacturer of the device who is blamed, because they failed to take care of something at the design or production stage.
In my opinion, AI is currently popular among people because of:
1. people's lack of general knowledge about how AI is designed and how it works,
2. the expectation that some mysterious but magical force will do something complicated for them (something beyond their capabilities) in an automagical way.
This is a classic case of people's lack of knowledge (and often reluctance to acquire it), combined with naivety and the desire to obtain the desired results quickly. Humans as a species have always been like this (to be precise: a large part of the population), and the development of humanity has progressed mainly thanks to those few individuals who "wanted to learn something and do something." A huge number of people have always wanted to "take shortcuts", preferring to believe in miraculous solutions because these require no effort (mental or physical) and no great amount of time. That is what people were promised by shamans, priests of various religions, rabbis, imams, commanders (duce, führer), party leaders (e.g. communists in the USSR, PRC, etc.) and charlatans, and today it is done by marketing specialists employed in corporations.
Returning to the main topic, i.e. FPC and AI: yes, AI can be useful for performing certain tedious but repetitive activities. Unfortunately, it will not design new, fancy and complicated classes, procedures, algorithms, GUIs, etc. for the programmer. It will not do this because it is not creative, only reproductive: according to given patterns, it produces variations on existing themes, those it finds in its database. AI is simply enormously overrated (a bit like Python or Bitcoin). It is probably quite well suited to formulaic and repetitive work, such as website development. It can also be useful as an aid when searching documentation or suggesting solutions to problems (if it finds them in its database). And that's it for now. Perhaps in time it will be able to generate code from a schema and a set of required parameters, provided the results are not full of errors.