By Nadja Hemingway and Daniel Ampuero Anca
Hyperscience is rooted in Artificial Intelligence and Machine Learning. Given AI is part of our DNA, it is only natural that we think about the ethical concerns related to its implications and make them a high priority. With that in mind, we launched our ethics committee (now in its second year) to focus on AI and Ethics. Recently, the committee held a meaningful and impactful event in New York on the 88th floor of the One World Trade Center, where Hyperscience is headquartered.
Let’s dig into the details.
Middle Schoolers Dive into AI Ethics
We received an email recently asking if students attending Eastern Middle School in Silver Spring, Maryland could come to the Hyperscience headquarters in New York City to film a documentary focused on the “ethical use of AI,” which will be aired at the American Film Institute in May 2025 as part of their Humanities and Communication Magnet Program. We were intrigued, excited, and wanted to get to know these young minds and understand how they think about AI, ethics, and the future of this technology.
So, we decided to host a panel discussion between the students and members of our AI Ethics Committee, asking them a few questions that would help us in our mission: to raise and discuss ethical concerns regarding AI's implications for our daily lives and, by extension, the Hyperscience platform.
Discussions and Debates – Can AI Control Us?
The students were enthusiastic about the potential of AI, but they also had real concerns about its future capabilities. One student voiced a common fear: “Can it go crazy and try to kill us all?” While this might sound like a question from a sci-fi thriller, it underscores a very real concern raised by some of the world’s leading AI researchers, such as Geoffrey Hinton, Yoshua Bengio, and Stuart Russell. As AI becomes more sophisticated, the risk of misalignment between AI’s goals and human values grows. Could Artificial Super Intelligence (ASI) pursue objectives that conflict with humanity’s best interests? It’s a scenario that researchers worry could have disastrous consequences.
In a world where AI is poised to take over jobs and reshape entire industries, these middle schoolers were already considering the long-term implications for their careers. Many of the professions they envisioned in the future may not even exist today, while others may vanish entirely, replaced by new roles shaped by AI and automation. This uncertainty about the job market of tomorrow adds to their intrigue and concern about how AI will continue to evolve.
As the conversation started to unfold, one of the students brought up the use of GenAI in homework and classwork and the ethical implications of this technology. In fact, one of them said, “On one assignment, I got flagged with a 95% chance of using AI, but I did not use AI for that. I think my writing style might be a bit similar to an AI style.” Her classmate added, “The teachers use AI checkers (detection software), but it is the free version and we often see false results.”
This led one of the committee members to highlight growing concerns over the reliability of the detection software teachers use to verify the authenticity of their students’ work. Writing that relies on common terminology or reads as more mechanical tends to be flagged as AI-generated more often. Because non-native English speakers and neurodivergent students often write in this style, their essays are more susceptible to being misidentified as AI-generated. According to a Bloomberg article, some students are spending an inordinate amount of time defending the integrity of their work, which can be difficult to prove in some cases. Some educators are already rethinking their reliance on detection tools and trying to strike a balance between maintaining high academic standards and trusting their students.
One of the main topics the 8th graders’ film focuses on is the inherent bias AI tools may carry, since the tools are built by humans who may hold conscious or unconscious biases. Above all, the students were incredibly positive; one exclaimed, “I am excited about the future.” Still, they worried about the mechanisms needed to keep AI under control. There was also a shared feeling that the job market they will enter in a few years will be immensely different from today’s: many professions will disappear and new ones will emerge, which has already made them reconsider their future career choices.
The Human in the Loop: A Key to Trustworthy AI
At Hyperscience, the conversation turned to how AI tools are designed and deployed in real-world applications. As the panel members explained, AI is an incredibly powerful tool—but it has its limitations. Despite the remarkable progress in machine learning, AI alone cannot always understand the subtle complexities of human needs or interpret data in ways that align with real-world contexts. That’s why Hyperscience emphasizes the integration of a “Human in the Loop” (HITL) approach in its workflows.
By involving human experts in the decision-making process, AI outputs can be evaluated, adjusted, and refined to ensure that biases and errors are corrected. This hybrid approach helps maintain accuracy and fairness, particularly in applications where the consequences of AI mistakes can be significant. It’s a model that recognizes the importance of human oversight in mitigating the risks that AI technologies pose, especially in critical areas like education, healthcare, and finance.
A Positive Outlook on the Future
Despite the ethical challenges and concerns, these middle schoolers remained hopeful about the future. While they are asking tough questions, they’re also looking forward to the possibilities AI holds for solving some of humanity’s biggest problems, from climate change to medical advancements.
As these students continued to document their thoughts and interview experts, they weren’t just learning about AI—they were learning how to think critically about its potential to shape the world they will inherit. They are crafting a vision for a future where AI serves humanity’s best interests, and where ethical considerations are at the forefront of technological advancement.
In the end, the students’ journey to explore the ethical use of AI is more than just an academic exercise—it’s a reminder that the future of AI is not something to be feared, but something to be shaped with responsibility, care, and thoughtful inquiry. These 8th graders may just be the ones who lead the way.