> The ChatGPT model has violated pretty much all open source licenses
Are you claiming this because they used copyrighted material as training data? If so, I think you're starting from the wrong point.
Please correct me if I'm wrong, but last I heard using copyrighted data is pretty murky waters legally and they're operating in a gray area. Additionally, I don't think many open source licenses explicitly forbid using their code as training data. The issue isn't just that most other companies don't have the resources to go up against Microsoft/OpenAI, it's that even if they did, it isn't clear whether the courts would find that Microsoft/OpenAI did anything wrong.
I'm not saying that I side with Microsoft/OpenAI in this debate, but I just don't think this is as clear cut as you're making it seem.
> Are you claiming this because they used copyrighted material as training data? If so, I think you're starting from the wrong point.
All open source licenses come under copyright law. That means if they violate the OSS license, the license grant is void and the material falls back under ordinary copyright protection. So yes, it would mean the model is trained on copyrighted material.
> Additionally, I don't think many open source licenses explicitly forbid using their code as training data.
They don't forbid it. For example, a permissive license like MIT can be used to train LLMs as long as the trainer is in compliance. The only requirement when you train on an MIT-licensed codebase is that you provide attribution. It is one of the easiest licenses to comply with: you just need to copy-paste the copyright notice. Below is the MIT license of Ember.js.
Copyright (c) 2011 Yehuda Katz, Tom Dale and Ember.js contributors
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
This copyright notice needs to appear somewhere on ChatGPT's website/product to be in compliance with the MIT license. If it doesn't, the MIT license is void and you are violating it, and the end result is that you are training on copyrighted material. I am more than happy to be corrected if you can point me to a single OSS license attribution shown anywhere for the training of the OpenAI model.
Also, this can still be fixed by adding attribution for the code that was trained on. THIS IS MY ARGUMENT. The absolute ignorance and arrogance on display is what reveals their motivation and agenda.
Which is why I am asking, WHAT IS STOPPING THEM FROM VIOLATING THEIR OWN TERMS AND CONDITIONS FOR CHATGPT ENTERPRISE?
A first offense could be excused as "blazing a trail and burning down the forest by accident".
But now they have a direct business contract with bigger companies that can lawyer up way better than open source foundations that live on donations and goodwill of code contributors.
Imagine they make a huge deal with Sony or Dell and either company can prove their "secure" enterprise plan was used for corporate espionage.
The legal and reputational repercussions could sink even a Fortune 100 company.
I thought attribution is required only if you redistribute the code. That's why SaaS businesses don't need to attribute the open source code they use on their backend. Maybe a similar concept could apply to training data. I'm far from an expert, so this is just a thought.
ChatGPT does redistribute the code. It's essentially the same issue as someone reading proprietary or GPL sources while working on a proprietary project: by not abiding by the license, they are breaking its terms. There is no possibility of a clean-room implementation with ChatGPT.
My whole point is that I don't think that's legally true at the moment. There's enough difference in how generative AI works compared to pretty much anything before it that what ChatGPT legally does is up for debate. If a court rules that what ChatGPT does counts as redistribution then yes, I agree that they're likely violating copyright law, but AFAIK that ruling hasn't happened yet.
This is the wrong way to look at it. It's along the same lines as the "we need a new license for AI" argument. There is nothing stopping an LLM/AI from abiding by the license. OSS licenses can be used by AIs or LLMs as long as they comply with the terms.
A license exists with terms. You can abide by the terms and use it. It doesn't matter whether an AI, a person, or an alien from a distant planet is using it; they can follow the terms. This is not a technical challenge but an arrogant refusal to comply.
Also, are you saying a model like ChatGPT can handle such complex tasks and text processing, but can't recognise an OSS license text of 20-ish lines?
I am not sure I can agree. What is stopping them from training only on permissively licensed code and adding attribution for all of the licenses on a single long page? Nothing. This is not a technical issue.
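To illustrate that this really is a small technical problem, here is a minimal sketch of license recognition and notice extraction. It is hypothetical, not anything OpenAI actually runs, and real tools (e.g. SPDX-based matchers) use fuzzier matching than the simple normalized-substring check used here:

```python
import re

# Operative clause of the canonical MIT license, normalized to lowercase
# single-spaced text. Matching against this is enough for the sketch.
MIT_TEMPLATE = (
    'permission is hereby granted, free of charge, to any person obtaining a '
    'copy of this software and associated documentation files (the "software"), '
    'to deal in the software without restriction'
)

def looks_like_mit(license_text: str) -> bool:
    """Detect MIT by checking the operative clause after whitespace/case
    normalization. Real detectors use fuzzy matching to tolerate edits."""
    normalized = " ".join(license_text.lower().split())
    return MIT_TEMPLATE in normalized

def extract_copyright_notice(license_text: str):
    """Pull the 'Copyright (c) ...' line that MIT requires you to reproduce."""
    match = re.search(r"^copyright \(c\).*$", license_text,
                      re.IGNORECASE | re.MULTILINE)
    return match.group(0) if match else None

# The Ember.js notice quoted above (abridged).
ember = """Copyright (c) 2011 Yehuda Katz, Tom Dale and Ember.js contributors

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:"""

if looks_like_mit(ember):
    notice = extract_copyright_notice(ember)
    # Append `notice` to the single long attributions page.
```

Running this over every LICENSE file in a training corpus and concatenating the extracted notices is exactly the kind of one-page attribution the comment describes.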
US copyright/IP management is such a shitsh*w. On one hand, you can get sued by patent trolls who own the patent for a 'button that turns things on', or get your video delisted for recording at a mall where copyrighted music is playing in the background; on the other hand, you have people arguing that scraping code and websites with proprietary licences is 'fair use'.
Taking this from a different perspective: let's say that ChatGPT, Copilot, or a similar service gets trained on the Windows source code. Then a WINE developer uses ChatGPT or Copilot to implement one of the methods. Is WINE then liable for including Windows proprietary source code in its codebase, even though the developer has never seen that code?
The same would apply to any other application. What if company A uses code from company B via ChatGPT/Copilot because company B's code was used as training data? Imagine a startup database company using Oracle's database code through this technology.
And if a proprietary company accidentally uses GPL code through these tools, and the GPL project can prove that use, then the proprietary company will be forced to open source their entire application.
> the proprietary company will be forced to open source their entire application
The number one misconception about open source licenses.
GPL doesn't mean that if you use the code, your entire project becomes GPL.
GPL means that if you use the code and your project is not GPL-compatible, you are committing copyright infringement, as if you had stolen proprietary code. If brought to court, it would be resolved just like any other copyright infringement case.