Author Topic: Using chatgpt to bring the pascal/lazarus platform up to date more quickly.  (Read 5510 times)

dbannon

  • Hero Member
  • *****
  • Posts: 2796
    • tomboy-ng, a rewrite of the classic Tomboy
Because it has no legal standing it is unable to utter a license or make any statement which would be admissible in a dispute.
Many of the evils of this world can be traced back to a decision to grant to (eg) companies much of the legal rights once reserved for a person. In many ways, an AI possibly has a better case to be granted those same legal rights.

Sadly, Isaac Asimov's Three Laws do not look like becoming a prerequisite.

Davo
Lazarus 3, Linux (and reluctantly Win10/11, OSX Monterey)
My Project - https://github.com/tomboy-notes/tomboy-ng and my github - https://github.com/davidbannon

Webdrifter

  • Jr. Member
  • **
  • Posts: 68
Many of the evils of this world can be traced back to a decision to grant to (eg) companies much of the legal rights once reserved for a person. In many ways, an AI possibly has a better case to be granted those same legal rights.

Sadly, Isaac Asimov's Three Laws do not look like becoming a prerequisite.

Davo

That is of course an entirely different discussion, which should be started in another thread.
And to help you with that start, I will give you one comment concerning this item.

Asimov's laws are flawed !!!

His first law states: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

So how would a specific robot define "injure" and "harm"?
Is intimidation a form of injury or harm, and what about the use of force without bloodshed?
And what will it have to decide when it has to kill somebody to prevent two (or more?) other people from being killed?
Do different rules apply to police or battlefield robots?
How can we prevent battlefield robots from being programmed in ways we don't want?
How can we prevent the programs of battlefield robots from being used in non-battlefield situations?

So Asimov's laws are crude guidelines at best.
And dangerous, because they create a false impression of how manageable the risks of AI really are!!!

I predict that robots will first become a tool in the hands of the powerful, and next will become a power themselves....

« Last Edit: January 16, 2023, 04:25:55 pm by Webdrifter »

marcov

  • Administrator
  • Hero Member
  • *
  • Posts: 11452
  • FPC developer.
IIRC Asimov says (in one of the Susan Calvin novels) that the words of the law are a rough approximation of positronic circuits only.

MarkMLl

  • Hero Member
  • *****
  • Posts: 6686
And in any case, the whole point of the stories- or at least the original ones- was the things that didn't work quite as people expected.

MarkMLl

MT+86 & Turbo Pascal v1 on CCP/M-86, multitasking with LAN & graphics in 128Kb.
Pet hate: people who boast about the size and sophistication of their computer.
GitHub repositories: https://github.com/MarkMLl?tab=repositories

440bx

  • Hero Member
  • *****
  • Posts: 4029
I figured I'd post this here instead of starting a new thread ...

A.I. seems to be getting a lot of attention these days, and while it can fake intelligence to some extent - it can certainly glue words together - the semantic value of the result is often "questionable".

One such example of rather questionable results occurred when Ann Reardon (dietitian and baker) asked ChatGPT for a cake recipe.  She attempted to make the "A.I. cake"; the process was recorded in the video at https://www.youtube.com/watch?v=nUqPOsgu0uo

Personally, I found the whole thing quite amusing and quite indicative of A.I's _true_ capabilities.

IMO, that baby still has a very long way to go.

(FPC v3.0.4 and Lazarus 1.8.2) or (FPC v3.2.2 and Lazarus v3.2) on Windows 7 SP1 64bit.

Martin_fr

  • Administrator
  • Hero Member
  • *
  • Posts: 9867
  • Debugger - SynEdit - and more
    • wiki
Well....

About the licensing: even if we could safely determine that any code produced by a given AI is free of "copied" material (either verbatim, or with negligible differences), or that the source of the copy permits use under the licenses that would be required..... that would not be the end of it.

I guess, but do not know, that under current law in probably all countries of the world an AI cannot itself hold legal rights. But that may change. Just to say: if a tree can have the right of owning itself: https://en.wikipedia.org/wiki/Tree_That_Owns_Itself ....

But then, an AI is in the end a machine. And it may well be possible that the owners of a machine hold rights on the product of that machine. And those rights would be held automatically, until explicitly given up or transferred....

Mind, all that needs to be checked for any country in the world.




But even then, lets for arguments sake ignore the above.

@Webdrifter Why don't you test how far an AI would actually get?

You say it can do debugging. Well then, we have a bugtracker full of bugs. Get the AI to tell you what is wrong: that is, which exact line of code does the wrong thing.
It doesn't need to give the corrected line of code (or you do not need to forward the correction).

If we know which line is wrong (and in which way it is wrong), then for many bugs that would be a huge step toward fixing them.
So all you need to do is add the info "which line is wrong, and why".


Or pick a feature that you miss, and see if the AI really generates stable, working code. (Because if it ain't stable, it may be easier to do it from scratch than to fix some half-baked code.)




Because if you don't succeed in getting good results for some bigger tasks, well, then it really doesn't matter what the license would be.

Afaik most of the examples of AI-generated code are either
- trivial / standard code blocks, just rewritten to use identifiers matching a given description
- small tasks, for which the human had to give a very, very precise description (in human language), and often had to go through several iterations before arriving at something useful. But if the human has to describe it in that much detail, then it's just a translation, and it may be easier for a programmer to describe it in a programming language right from the start.

Of course I could have missed some news. After all there is a huge flood of news going on... (Maybe I could filter those news using an AI... ;) )


 

Чебурашка

  • Hero Member
  • *****
  • Posts: 568
  • СЛАВА УКРАЇНІ! / Slava Ukraïni!
I was asking OpenAI chat about licensing.
This is snippet:
Code: Text
Me: I am more interested in licensing regarding the code you produce. Can I use code you generate without legal consequences?

Answer:
As a language model, I am not capable of having a legal standing or holding a license, any code I produce is intended to be used as an example and for educational or informational purposes. However, it's important to be aware that code generated by me, like any other code, may be subject to intellectual property rights.

It's ultimately up to the owner of the rights to determine how the code can be used and distributed. So it is important to be cautious and get permission before using any code generated by me or any other third party if it is intended to be used for commercial purpose. Additionally, I suggest you double check the code and adapt it to fit your needs and verify that it does not infringe any other rights.

I always include a LGPL license as an example of open-source license but that doesn't mean the code is automatically licensed under LGPL, you should verify if the code can be used under that license or any other. It's also a good practice to always check licenses of any open-source libraries or code you use to ensure that you are using them legally and in compliance with the license terms.

While reading this I started wondering: from now on, how can anyone be sure that every contribution made to fpc (or any other community software) is genuine, and does not originate from an AI that, in turn, could have generated it using copyrighted code?

FPC 3.2.0/Lazarus 2.0.10+dfsg-4+b2 on Debian 11.5
FPC 3.2.2/Lazarus 2.2.0 on Windows 10 Pro 21H2

440bx

  • Hero Member
  • *****
  • Posts: 4029
<snip> ... from now on how can anyone be sure that every contribution made to fpc (or any other community software) is genuine, and is not originating from an AI, that, in turn could have generated using copyrighted code?
The problem of the code being _original_ pre-dates A.I.  This is why projects that clone the functionality of some piece of software are done in "clean rooms".

What makes A.I. very deficient in any technical field (even baking) is that it has no ability to _analyze_, and therefore cannot create even a semblance of consistency.
(FPC v3.0.4 and Lazarus 1.8.2) or (FPC v3.2.2 and Lazarus v3.2) on Windows 7 SP1 64bit.

lainz

  • Hero Member
  • *****
  • Posts: 4468
    • https://lainz.github.io/
chatgpt is not that bad, it works sometimes and sometimes not, like ourselves, who make bugs in software as well.

dbannon

  • Hero Member
  • *****
  • Posts: 2796
    • tomboy-ng, a rewrite of the classic Tomboy
WRT bugs, lainz, sure, my first cut of code has bugs; I then find and fix the bugs. That's easy because, having just written it, it's clear in my head what it is supposed to do and how it does it. If you were debugging my code, or if I were, a year after writing it, it would be harder. So, for you or me, debugging the AI's code is always going to be harder.

WRT the license, that's a lot harder. The statement Чебурашка posted does, IMHO, give fair warning to anyone using AI-generated code: "someone", "sometime" may knock on their door and say "you are using my code".  Maybe we need a tickbox on every bug report, every pull request: "No AI code or other unclearly licensed code included"?

Remember Unisys ?

Davo
Lazarus 3, Linux (and reluctantly Win10/11, OSX Monterey)
My Project - https://github.com/tomboy-notes/tomboy-ng and my github - https://github.com/davidbannon

Чебурашка

  • Hero Member
  • *****
  • Posts: 568
  • СЛАВА УКРАЇНІ! / Slava Ukraïni!
The problem of the code being _original_ pre-dates A.I.

True,
the difference is that before, it had to be someone searching/reading/understanding and then copying in a conscious way; now this can be done at the speed of a computer, even though it "understands" in a different way than we do. My feeling is that the problem is now much more amplified than before.

Maybe we need a tickbox on every bug report, every pull request, "No AI code or other unclear license included" ?

Likewise, I was thinking that one possible idea is that each contribution should come with a "statement from the author" that declares the originality/ownership of the piece of code submitted.
« Last Edit: April 25, 2023, 08:27:29 am by Чебурашка »
FPC 3.2.0/Lazarus 2.0.10+dfsg-4+b2 on Debian 11.5
FPC 3.2.2/Lazarus 2.2.0 on Windows 10 Pro 21H2

marcov

  • Administrator
  • Hero Member
  • *
  • Posts: 11452
  • FPC developer.

Likewise, I was thinking that one possibile idea is that each contribution should come with a "statement from the author" that declares the originality/ownership on the piece of code submitted.

That is like putting a notice on the bugtracker saying "please only submit original code that you own". Without tracing that the owner is a legally valid entity, that is pretty much useless.

Blaazen

  • Hero Member
  • *****
  • Posts: 3237
  • POKE 54296,15
    • Eye-Candy Controls
Looks like chatbot joined the thread.  ;D
Lazarus 2.3.0 (rev main-2_3-2863...) FPC 3.3.1 x86_64-linux-qt Chakra, Qt 4.8.7/5.13.2, Plasma 5.17.3
Lazarus 1.8.2 r57369 FPC 3.0.4 i386-win32-win32/win64 Wine 3.21

Try Eye-Candy Controls: https://sourceforge.net/projects/eccontrols/files/

TRon

  • Hero Member
  • *****
  • Posts: 2514
IMO, that baby still has a very long way to go.

Although I have not seen the video you linked to, I did watch some coding-related videos that use chatgpt, and in those videos it is claimed to be the next best thing to breathing.

What makes A.I very deficient in any technical field (even baking) is that it has no ability to _analyze_, therefore create even a semblance of consistency.
Exactly that.

I find the (holier than holy) claims very funny, because if you throw real (complicated or even simple) problems at it that it has no notion of, then it becomes quite clear that chatgpt is just an overglorified guessing machine. See also https://www.youtube.com/watch?v=flXrLGPY3SU

That someone in his/her right mind even considers using it in a real (working) environment is imho where the real danger lies, especially when you are aware that everything you feed it is remembered/stored (for eternity?). Not to mention the bugs it introduces (you only know that because you know the topic being discussed at a detailed level; otherwise you will not be aware).

The ai is stubborn as hell as well. Even when you tell it to use another solution (which it does not seem to understand), it tries that and does it wrong in the process (can happen); you correct it by being explicit (presenting the literal solution), and in response the ai decides to take a completely different approach (which you did not ask for).

Especially the need to be very explicit when it comes to "complicated" problems is /very/ tiresome. If I need to write all that down, then I could just as well spend/waste that time writing the code myself.

I am aware that using an ai takes time to learn, but if you have to be so explicit as to literally feed it the solution and it still does not incorporate that into the answer, then I would say it is far from usable and I would stay the hell away from it.

And that is without touching the subject of copyright as that is a whole other can of worms.

Having said that, it is a perfect tool for learning about new topics, as the example code that it produces seems to be of reasonable quality. It is even capable of understanding new topics (free pascal, new classes, new/uncommon targets). It also seems capable of combining example code into a working example and explaining in detail what the code is doing. The same is true for letting it do boring tasks like creating test data, setting up basic cradle code, sorting data, converting data from one format to another, etc. In that regard it can be a very helpful tool.
« Last Edit: April 25, 2023, 04:26:24 pm by TRon »

lainz

  • Hero Member
  • *****
  • Posts: 4468
    • https://lainz.github.io/
This week I was finishing an API with node.js and loopback.

I used chatgpt to make one of the most difficult endpoints, and it didn't work. I copy-pasted the code and it gave errors about classes. Of course, an ORM like loopback is hard, and it is hard to make relations from one table to others. On that point chatgpt failed.

Then I tried another approach, using SQL and joining the data with 'map', and in that way chatgpt worked better, giving me queries or updating the queries to my specifications.
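For what it's worth, the "SQL plus map" approach described above can be sketched roughly like this (a hypothetical illustration only; the table shape and field names here are made up, not from the actual project):

```javascript
// Hypothetical rows as they might come back from a raw SQL JOIN:
// flat, one row per user/order pair.
const rows = [
  { userId: 1, userName: 'Ann', orderId: 10, total: 5 },
  { userId: 1, userName: 'Ann', orderId: 11, total: 7 },
  { userId: 2, userName: 'Bob', orderId: 12, total: 3 },
];

// Collect the distinct user ids, then attach each user's orders
// via filter/map instead of relying on ORM relations.
const userIds = [...new Set(rows.map(r => r.userId))];
const users = userIds.map(id => {
  const own = rows.filter(r => r.userId === id);
  return {
    id,
    name: own[0].userName,
    orders: own.map(r => ({ id: r.orderId, total: r.total })),
  };
});

console.log(JSON.stringify(users, null, 2));
```

Whether this beats letting the ORM resolve the relation depends on the case; here the join logic is explicit and easy to hand to (or get back from) a chatbot as plain SQL plus a small mapping step.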

So it is bad sometimes and good sometimes.

Generalizing this as a bad tool is in fact not accurate.

 
