In the world of AI, what counts as "open source" has become a hot topic of debate. According to a group that recently published a proposed standard, an open-source AI system should be available for use for any purpose without requiring permission. Researchers should be free to inspect its components, study how it works, and modify it, including to change its outputs. Sharing the system with others, with or without modifications, should likewise be allowed for any purpose.
This new standard not only promotes transparency but also aims to set a clear bar for openness with respect to a model's training data, source code, and weights.
Until now, the absence of an agreed open-source standard created confusion in the industry. Some companies, such as OpenAI and Anthropic, keep their AI models, data sets, and algorithms secret, making them closed source. Others, such as Meta and Google, release models that anyone can download. Even these models, however, come with license restrictions that limit users' ability to fully inspect and adapt them, fueling debate over whether they are truly open.
Avijit Ghosh, an applied policy researcher at Hugging Face, points out that companies sometimes misuse the term "open source" when marketing their AI models. Such branding can create a false sense of trustworthiness, because researchers are not actually free to independently verify how open the models are.