
This discussion is based on the PBS Frontline documentary, “In the Age of AI.” To watch the documentary, visit this link:
PBS Frontline: In the Age of AI
Questions for Discussion
- When Kai-Fu Lee says, “Data is the new oil,” and “China is the Saudi Arabia of data,” what does he mean in those statements? Can you describe at least three ways that companies or governments with data monopolies could benefit from having those resources over other companies or governments?
- From your perspective as a user or non-user of social media or search indexing technology, do you think social media, searching or website (cookies) tracking of US citizens is important in the race to become a leader in artificial intelligence for the United States, or is that tracking/collection an invasion of privacy and individual rights in our country? What about in other countries?
- Can you think of a few careers you might consider pursuing when you graduate from college or graduate school? Once you have a few ideas, visit https://willrobotstakemyjob.com/ and plug in those career ideas to see their overall risk to being replaced by automation. How vulnerable are those potential careers to automation? Can you think of 2-3 careers that existed in 1950 that do not exist today? How about 1990? How about 2020? List all the jobs from those years/decades that have become redundant and replaced by automation.
- We live in a remarkable time where nation-states are vying to become the leaders of AI technologies. What challenges do you foresee your generation having to face as nation-states look to become AI leaders and potentially use that technology against other nations? Who do you see as the key players in that race? In your opinion, how will that shape diplomatic and economic relationships between the United States, the European Union, the Russian Federation and China?
- How will services like ChatGPT, QuillBot, etc. change the landscape of learning a fundamental skill like how to write essays? What about writing code or undercutting the need to learn computer science? What other essential tangents of learning could be replaced by AI? Of those tangents, is it a good thing that humans would/would not need to learn that skill? Why? Explain your answers.
AS A REMINDER, please cite the URL of whatever sources you use to answer these questions.
1. When Lee says, “Data is the new oil,” he is saying that data is now our most valuable resource. Data powers deep learning and AI, and it is so valuable because the more data you have, the more effective your algorithms can become. If you have a data monopoly, you basically control the world because you can regulate what everyone has access to know and how good their algorithms are.
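As a toy illustration of why more data makes algorithms more effective (my own sketch, not from the documentary, with made-up numbers): imagine a company estimating how often users like a product from its logs. The estimate from a large log is far more accurate than one from a small log, which is one concrete edge a data-rich company has.

```python
import random

random.seed(0)

def estimate_error(n_samples, true_p=0.3, trials=200):
    """Average absolute error when estimating a preference rate
    of true_p from n_samples logged observations."""
    total = 0.0
    for _ in range(trials):
        hits = sum(random.random() < true_p for _ in range(n_samples))
        total += abs(hits / n_samples - true_p)
    return total / trials

# A company with 100x more data gets a far more accurate estimate.
small_data_error = estimate_error(10)
big_data_error = estimate_error(1000)
print(f"error with 10 samples:   {small_data_error:.3f}")
print(f"error with 1000 samples: {big_data_error:.3f}")
```

The same shrinking-error effect applies, roughly, to the models trained on that data, which is why data monopolies compound into algorithmic advantages.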
2. I think I am so unimportant in the grand scheme of things that my data doesn’t really matter. I think the US could find more ethical ways to collect data, but at least, for the most part, they aren’t bugging our homes or anything. Some may think it is an invasion of privacy, but when you go online you are consenting to giving up your data, so you can’t really complain.
3. I’m not really sure what I want to be, but definitely not a sewing machine operator anymore; they’re at high risk. I probably want to be some sort of mechanical engineer, potentially working with aerodynamics. Some coding would be cool too. In 1950, telephone calls were connected by human operators; not anymore.
4. I think AI will become too integrated with politics. It should be separate, a market industry not powered by the government. That creates too much power and can lead to more arguments. It will definitely not help international relations.
5. Kids will learn in elementary and maybe middle school. After that they will become reliant on AI, but in order to actually succeed they will have to recognize that they must work in collaboration with the AI. AI does not have free thought like humans, so you would need to give the AI your thoughts. A lot of tedious tasks will become less relevant and practical, and learning them will naturally fade away. I think it is a good thing to have these skills, but they will eventually not be necessary.
I agree with you about number 2; I am not that concerned about privacy either. Do you think that basic skills no longer being necessary because of AI is a bad thing? One part of me thinks that if we don’t need them there’s no use in learning them, but I also feel like AI will begin to take over the basic parts of human life.
I think your point on #4 about AI being integrated into politics is very interesting, and I’d like to know more about what shapes and forms you think that would take. I actually disagree with you a little about AI remaining a private industry, just because I feel it is inevitable that governments will start using it to remain in or gain power.
Your response on question 5 reminds me of a video I watched. An AI developer was sharing what he thought were the three greatest dangers AI presents. The third danger, the one he said was most likely and frightened him most, is the possibility of AI taking over the world without us even realizing it. We will end up giving so much autonomy and so much of our decision-making over to AI that eventually it will basically rule the world. So even if “AI does not have free thought like humans, so you would need to give the AI your thoughts,” it would still be a system in which the AI ultimately gives the human the answer, and the human by then is trained to just obey the AI, leading to a world where AI essentially controls us even though we consider it just a tool.
I really like in answer 1 when you said “you can regulate what everyone has access to know,” because that’s a lot of what I thought about when watching the documentary, and I think it really encapsulates how scary the concept of regulated media can be. People will be fed strictly homogenous views to alter their thinking and be completely mindless to it.
I agree that AI will become too tangled with politics. I think it will be used in targeted campaigns, replacing the ads that you already get bombarded with before an election cycle. I wonder if kids will get access to AI even before high school. It’s certainly possible that they’ll be reliant on AI before even getting through elementary school, especially considering how dependent teachers are becoming on technology.
1: He means that data is the new “substance” that will run the world, and China will be a huge producer of it, meaning it will be able to single-handedly manipulate the data market, just as Saudi Arabia does with oil now. Governments with monopolies could also use their data to control their own populations or launch highly targeted influence campaigns abroad, leveraging their deep knowledge of the views of their adversaries’ populations.
2: Ideally, we shouldn’t be tracked at all. But that isn’t a realistic outcome at this point, so I think that whatever data a company has on us should stay locked away in that company, not sold or even worse, given to the government.
3: I’m currently pursuing a career as an airline pilot. It’s listed with a 47% risk rate, but from personal experience I would honestly place that much lower. The FAA moves so incredibly slowly on EVERYTHING that I don’t see pilotless Part 121 air carrier operations in the next 30 years. Part 135 charter and other commercial operations? Maybe, but the rate at which the government moves is simply too slow. The computer has been able to land the plane since the ’70s (even on moving aircraft carriers), but pilots have not gone away. What has gone away is the flight engineer, and indeed it is possible that air carriers may move to single-pilot operations, but one person will still be on the flight deck.
4: The biggest thing we’ll see, and indeed we already are seeing, is invisible influence campaigns over social media. With the amount of information our and other governments have on us, it’s only logical that they’ll use it to try and slowly change our beliefs without us knowing. This already happened with the widespread Russian interference in the last few election cycles, and it isn’t hard to find bots on anything Russia-Ukraine related. But the real danger is in the bots you don’t realize you’re reading from.
5: I just don’t think the take-home essay, as a format, will exist anymore. There’s no reason for teachers to assign work that they know will just be written by an algorithm. Writing will always be a useful skill, so I think teachers will likely move to in-class essays or other writing formats in the coming years. STEM topics are generally safer from AI, as no amount of ChatGPT can help you on test day or during a lab, but anything with a focus on writing, especially analytical writing, will have to change its approach. A lot of the writing that goes on every day is honestly slop that ought to just be replaced by a computer, but AI cannot create anything truly “new,” so programming will remain an important skill that can only really be done by a human, at least when they are trying to innovate.
I think you make a good point that even though computers can practically do some things, we are still going to want humans to be there in case anything goes wrong. Some manual, repetitive jobs are at risk, but being able to think and create is what sets humans apart and is why we need humans in the workforce.
Your point about pilots is interesting; I think automated pilots will not exist for quite a while, not just for logistical reasons but because people don’t trust automated systems with their lives, and, if any crashes were to occur, automation would be blamed a lot more than human pilots would be.
I really like your point about how STEM classrooms won’t change much in regard to AI because of in-class testing, but I do think that as we grow more advanced as a society and require more specialization, our STEM learning will become more directed and more heavily involved with AI for quicker learning.
I like your point on how countries, if they have enough data, can run “highly targeted influence campaigns abroad.” However, I’m not sure how these governments would be able to get the data of the people of another country. Do you envision data leaks, cyber attacks, or something of the sort? Even then, how much could this data actually help in influence campaigns?
Also, I wonder why you said “not sold or even worse, given to the government.” What makes you so against the government having your data?
For number 2, I would want to know your opinion on personalized feeds. If you think they are positive, how can we balance the information used for personalized feeds against information that could be “sold to the government”? What boundaries should we create, and how do we create them?
1. Kai-Fu Lee says that data is the new oil to emphasize its rising importance and value. It is the most valuable resource in the current economy. Companies with data monopolies would be able to gain competitive advantages and influence many markets. Data allows them to eliminate potential competitors by having access to customer opinions/preferences on products and services. They can also see which products are starting to emerge as popular and use that to shape their markets.
2. Well, there are consent forms that you fill out that basically say they can track your data. I don’t mind it; it makes my feed personalized so I can find things that interest me quickly. I’m not sure what information about me is valuable enough to be used for something bad. It can feel creepy at times, but I think there would also be complaints if there were no tracking and feeds were not personalized. In any case, I think all apps should ask for full permission before gathering any information from an individual. If people don’t want their data shared, it shouldn’t be shared. I also think companies should be limited in their usage of data and should be prohibited from sharing it outside of their organization (like with the government).
3. I plan on pursuing mathematics, and my career could be in jeopardy depending on which path I take. If I take a more computer-science-based math career path, it’s not looking good. However, mathematicians themselves seem to be holding pretty strong (which makes sense; one time Flint told me 2+3 was 4). Certain careers that existed in 1950 but don’t exist today are milkmen (now people have fridges and online grocery shopping) and elevator operators (the systems are much more advanced now, and we don’t need someone controlling the speed of the elevator). Jobs like film developers and video rental store owners from the 1990s have also died out with all the new platforms and ways to stream media. A career from 2020 that sticks out to me as disappearing is the cashier. There are more and more self-checkouts, as well as certain stores where you just look into a camera and then walk out with your items purchased. At this rate, who knows what’s next.
4. I’m honestly most worried about how governments are going to try to subtly convince us of, or lure us toward, a certain political ideology, or just change our beliefs to align with the beliefs they want us to have. If they have our data, they can control what we see. In addition, if something like ChatGPT gains some sort of bias, young people who use it as their main source of education/information will be naive to the bias and easily influenced.
5. Using AI to fix grammar and punctuation could help students, especially kids who struggle with learning differences or who are still learning English. It takes away some of the harder parts of writing so we can focus more on the creative side and our ideas. But there’s also a downside. If we rely on AI too much, we might stop developing our own writing skills. Writing isn’t just about putting words on paper; it’s also how we figure out what we think. If an AI does that part for us, we might miss out on learning to dig deeper into topics ourselves. This applies to computer programming as well. AI can handle a lot of the repetitive coding work, which lets programmers focus on bigger problems and creative ideas. But if students don’t learn the basics, they might depend on AI alone instead of really knowing how to code, and might trust AI too much, which can become dangerous and possibly cause bigger problems later because of errors.
I really like your point on #4 where you talk about governments luring humans toward a certain political party through effective marketing based on what our data show. I think it’s a really interesting insight that will no doubt be implemented in the future, but do you think it will be as effective in democracies? Monarchies? Dictatorships?
I definitely agree with you about AI teaching students how to code. When I’ve used AI to code, it’s very easy to copy and paste code and get a working product for the simple problems you work on while learning, but as soon as there are issues in the code, I really start to struggle to debug code I’m not familiar with. I feel like (or hope that) AI will never become fully adept at coding in a way that allows it to automate full, complex projects, so relying on AI too much can be dangerous.
I really like what you have to say about companies and the government using AI and our data to influence and control us, not necessarily even with our knowledge. Do you think there is an effective way to prevent this from happening?
Your point for question 5 is super relatable. Once upon a time I was trying to make a mathematical model using Python, except I didn’t really understand Python. Initially, when I went to AI for help making the model, I intended to learn the basics of Python hand in hand with actually developing the code. However, I ended up relying heavily on the AI-generated code rather than my own skills, leaving me with a mass of code where I couldn’t really tell which part did what. Therefore, every time I wanted to change something about the code, I had to go back to the AI for help. Eventually, it became so exhausting that I just had the AI tell me what each line of the code did so I could edit it myself. In the end, I spent a ton of time having the AI teach me and trying to get the AI to write the same code; furthermore, I never even reached the proficiency to add new nuance to the code myself. I had to ask the AI to add it for me, then spend an hour figuring out what had just happened.
I certainly think that the data they have on you is valuable. They know what you do and don’t like, how you consume content, when you consume it, really every single detail about how and when you use the internet. Imagine how easy it would be to change someone’s mind if you knew their exact opinion on everything. That’s the danger of having nearly unlimited data on everyone. I also agree that AI can prevent kids from learning the basics, because it’s actually pretty good at explaining them. Where it struggles is the more complicated and specific topics, which kids will struggle to understand on their own if AI has hijacked their ability to do the basics.
When Kai-Fu Lee says, “Data is the new oil,” and “China is the Saudi Arabia of data,” what does he mean in those statements? Can you describe at least three ways that companies or governments with data monopolies could benefit from having those resources over other companies or governments?
Just like control over oil gave countries a huge upper hand in international trade, data can be monopolized to control trade because companies with access to detailed data can maximize their profits by predicting all of the interests of their customers. Because China has less strict privacy regulations but a lot of technology, they are able to collect more data than other countries. Governments or companies with data monopolies would be able to sell data to foreign companies that need it and generate a ton of revenue, or withhold data and use it to boost the success of national companies so that foreign companies would suffer. Having a lot of data could also allow governments to create more strict security and police initiatives that would make their country safer.
From your perspective as a user or non-user of social media or search indexing technology, do you think social media, searching or website (cookies) tracking of US citizens is important in the race to become a leader in artificial intelligence for the United States, or is that tracking/collection an invasion of privacy and individual rights in our country? What about in other countries?
I have never been very concerned with the value of my data because I don’t think there’s very much that companies could do with my data that would be a threat to me. However, some of the extremely detailed data that is collected freaks me out a little so I am a little worried about the continual increase in the amount of data that companies collect.
Can you think of a few careers you might consider pursuing when you graduate from college or graduate school? Once you have a few ideas, visit https://willrobotstakemyjob.com/ and plug in those career ideas to see their overall risk to being replaced by automation. How vulnerable are those potential careers to automation? Can you think of 2-3 careers that existed in 1950 that do not exist today? How about 1990? How about 2020? List all the jobs from those years/decades that have become redundant and replaced by automation.
Unfortunately, I am most interested in computer science, so I’m super cooked. There are a lot of older jobs that don’t exist anymore today: telephone operator, human computer, and (for the most part) secretary. Since 2020, there have been hints of jobs like cab drivers, grocery store clerks, and accountants being replaced, but I don’t think anything has been fully replaced yet.
We live in a remarkable time where nation-states are vying to become the leaders of AI technologies. What challenges do you foresee your generation having to face as nation-states look to become AI leaders and potentially use that technology against other nations? Who do you see as the key players in that race? In your opinion, how will that shape diplomatic and economic relationships between the United States, the European Union, the Russian Federation and China?
I think the race to improve AI will lead to less and less safe collection and management of data to a point that will eventually infringe on our privacy. I think a big problem that nations will have to tackle is the job loss after AI replaces a large portion of the workforce, which could lead to some bad economic periods. I would imagine that the US and China will start a sort of Cold War-like race for improvement of AI.
How will services like ChatGPT, QuillBot, etc. change the landscape of learning a fundamental skill like how to write essays? What about writing code or undercutting the need to learn computer science? What other essential tangents of learning could be replaced by AI? Of those tangents, is it a good thing that humans would/would not need to learn that skill? Why? Explain your answers.
I feel like language models allow students to produce (relatively) high-quality work without any of the thought process behind it. The brainstorming and refinement of schoolwork is sort of the whole point, and in my opinion, relying on ChatGPT for all of that is really detrimental to learning. I do think that if you use ChatGPT well, you can really improve how you learn and get individualized help on tasks. But in computer science, for example, kids learning CS can easily generate and copy/paste chunks of simple code that will work and get cool results quickly without understanding any of the code they are running. While that works in the short term, I don’t think it will work for large and complicated projects, so I worry about the level of understanding that people are building. For mundane tasks, I think that’s fine, but for fundamental concepts I think it’s concerning that students are able to get by without understanding.
I agree with your last point that learning to think is the fundamental purpose of school, and that the lack of creative effort AI allows is detrimental to that sort of learning. I was wondering if you think there will be restrictive AI technologies in class (sort of like Flint) that will help find the balance between giving students answers and helping them comprehend concepts.
I think what you said in prompt number 5 is really important to talk about. Even with AI bans and restrictions, a lot of kids still use it to do their work. How do we create a learning environment where kids aren’t just banned from using AI, but understand that it is detrimental to their learning and comprehension and are then self-motivated not to use it?
On your point in question 5, how do you think this rapid growth in the use of ChatGPT in schools will play out in the long run? Do you think schools will change? Will classes become “how to use AI to do this” rather than “how to do this”? If so, what can be done to prevent that?
On question 4, it was interesting how you pointed out that AI has the potential to replace many jobs and push many people out of the workforce, potentially leading to economic decline and possibly an increase in poverty. It seems very strange to me that the government would so recklessly pursue AI innovation and improvement if this could be a cost of it. Do you think the government simply doesn’t care, or will it be possible for them to pursue slower, safer strategies that won’t have such consequences?
I agree that AI can have both good and bad effects on learning. A good solution might be to formalize AI use classes to help kids get an understanding of how to best use AI. Without it, it’s likely that students will use AI as a crutch from a very young age, and once they get to content that AI isn’t great at, they’ll struggle.
1.) When Kai-Fu Lee says “Data is the new oil,” and “China is the Saudi Arabia of data,” I think he is talking about the monetary value data holds in our new AI world, but also the fact that China has been able to get a lot of both quantitative and qualitative data on all of their citizens and people around the world. Similarly, data is like the oil in the way that it fuels AI models to work. I think one way a company or government could benefit from having a large dataset over another is simply blackmail. Even though it is illegal, it can go unchecked and having access to such a large dataset could give access to highly classified data which companies could use for their personal benefit. In addition, I think companies with data monopolies have a better advantage of making efficient systems for production because they’ll understand what works and what doesn’t. Finally, I think that companies with data monopolies are able to stay at the top of their respective business, and with the appropriate data stay at the top while pushing startups down before they can fully grow.
2.) I think that it isn’t an invasion of privacy because I believe progress comes with its consequences and this is one of the things I think is necessary for us to grow as a society. I think that the tracking of US citizens is important because if they don’t do it, then I guarantee the next country in line will extract everything possible.
3.) A lot of the financial careers (mainly data analysis) I was interested in pursuing in college are at pretty high risk, averaging a little over 50%. Since the 1950s, many assembly-line jobs have been automated away, and pinsetters and elevator operators don’t exist anymore either. Since 1990, travel agents have declined a lot, and since 2020 I’m not completely sure. These jobs were replaced by automation: assembly-line work, data entry, customer service, cashiering, and telemarketing.
4.) I think my generation will have to face a lot of issues in regards to privacy and at what point it becomes too excessive. Progress comes with its consequences, so staying at the top with AI would directly require an invasion of privacy for citizens in every country. I also think that our generation will face a lot of issues in regards to AI weaponization whether in developing complex forms for biowarfare or machinery.
5.) I think that AI services will struggle to change the landscape of learning the humanities, since they are opinion-based and forms of writing are constantly changing. I think AI services will take over a lot of STEM learning, since there are definitive answers that AI can reach both correctly and efficiently. I wouldn’t be surprised if classroom learning styles change over the coming decades, with more forms of automated learning or even chips in our brains to increase our knowledge.
It’s interesting to think about automated learning, which definitely does seem possible with some of the neural implant technology that has been developed. It’s a very weird concept and I don’t know that I really approve of thinking being automated but I feel like technology usually progresses anyways, so I hope I die before all that happens.
I think your point in 4 about AI weaponization was really powerful and something I had never considered before. If placed into the wrong hands, who knows what could be done, and who knows how powerful AI could become.
I thought your question 2 answer was an interesting idea: it’s the lesser of two evils. Ideally privacy is pretty cool, but when we are faced with legitimate threats, and other nations’ lack of morals gives them an advantage, we have to reconsider our own standards and whether they are what we need them to be, not just what we want them to be. However, if we don’t draw the line somewhere, privacy risks will become the greater evil, as our own nation might become more oppressive than a rival nation. Figuring out the tightrope to walk will be tricky.
I think your point in question 1 is interesting. The real problem is that most companies work in their self-interest and deprioritize what may be ethical. I do not expect this to change, because these companies want to stay afloat. How do you think we can regulate these big businesses with access to and control over our data without creating an economic crisis?
I agree that financial analysts will probably be gone. Their entire jobs revolve around analyzing large datasets, which is quite literally what AI does best. AI weaponization will also be a huge issue for all of us, and unfortunately I don’t think there’s very much we can do about it. It’s likely that we’ve all already seen a weaponized, AI generated social media post specifically designed to make us think in a specific way.
1. Kai-Fu Lee is comparing the importance and use of oil to data. Oil is vital for industrial development and modernization, and data is the most important aspect when it comes to the development of AI. Furthermore, just as oil is an important international commodity and gives countries global influence, the same thing is happening for data. And just as Saudi Arabia is a leading nation in oil production because they are oil rich in their ground, China is leading the globe with its data collecting because they are data rich. How data monopolies can benefit companies in competition with other companies: 1) More targeted ads based on user activity that encourage more purchases on their items/sites than those of their rivals, 2) Create innovative solutions to problems, thus outpacing competition 3) Improving biometric identification for citizens.
2. I’m not really sure how tracking my social media and harvesting data from my activity on it would help the US become a leader in AI, because I don’t believe an AI built on my social media would have much global relevance. However, if it does, then I would agree it’s important for companies to have access to data, so long as it is kept in an anonymous data bank. I don’t think it is an invasion of privacy, because to a certain point we must acknowledge that we are operating outside our own domains when we go on the internet and leave digital traces, if that makes sense. However, secure items like identification, emails, and such shouldn’t be touched. Whether or not it is a violation in other countries depends. For example, in China it certainly isn’t (at least as defined by the government).
3. Politicians had a non-existent threat (they weren’t even an option to select, which makes sense); lawyers, mathematicians, and electrical engineers have a low risk of being replaced. 1950: phone operator, milkman, gas station attendant. 1990: video rental store clerk, typewriter mechanic. 2020: I can’t think of any.
4. I foresee two major challenges I might face in the future. First, governments will increasingly reduce regulation on privacy, letting both themselves and companies engage in more data mining in order to improve their AI capabilities. I also envision that rival countries will be less likely to share critical data, preferring to hinder each other rather than solve global crises. There is also a chance that AI will assist nations in cyber attacks, requiring us to develop stronger cyber security. The main rivals will probably be the US and China, with the EU playing a part as well. India also has a chance to be a notable figure because of its rising economic and technological growth. It definitely will shape relations between the four parties, but I don’t think they will change too radically from what they are right now. It will most likely just intensify currently strained relationships or reinforce alliances.
5. Despite our attempts at standardized learning, fundamental skills like essay writing are unique to each individual. Who that individual is and how they develop their skills leaves them with a unique style, sort of like how each person writes differently. Having AI teach/show students how to write essays will lead students to lose their individual styles and adopt the bland, generic style of AI. The same goes for code: AI will teach students the most standard methods of solving problems. This lessens programmers’ creativity in developing new solutions and code blocks. Although something like AlphaGo can happen, where AI actually teaches coders new, innovative solutions, the same problem exists: coders themselves lose the ability to imagine new solutions. Key skills related to learning that could be replaced by AI are the ability to break down and read complex literature, translate languages, and do research for papers/studies. Although this kind of work seems like busy work that just takes up time, the soft skills learned from performing these processes are critical to developing people who can be beneficial to society.
https://www.indeed.com/career-advice/career-development/jobs-that-don't-exist-anymore
I agree that AI will lessen creativity and can cause a loss in individuality. Are there any of those “busy work” tasks that you think are acceptable to not be human-based anymore?
Your point in 1 about global influence is very interesting, because I had answered my questions with a kind of “within-country” mindset, but it is totally feasible that data from other countries can be used to benefit a certain country and help it gain more power.
Your point in #4 is really interesting. Countries will prioritize having a leg up on the others instead of sharing data that could help everyone. We can already see this happening, with countries keeping a lot of important information to themselves. Developments have become integrated with politics, and the US already wants to use anything created here as leverage. Is there any way we can really stop this from happening and help the common good?
I wonder what reaction there will be to the decreased regulations on data collection. Will there be mass protests or will people just not care? Will it even be publicized or will it go under the radar like most tech legislation? I personally think that people are going to start caring a lot more and more about what’s being done with their data. Hopefully the opposite of what you predicted happens, and the US implements EU style user data protection laws. In the EU, anyone can request all the data any company has on them and said company is legally required to comply. I hope the same happens here.
I really like your idea of developing your own literary style, and how AI involvement in the essay-writing aspect of school would severely hinder that. I think the creative thought that goes behind each crafted piece of writing is more important as a learner than what the final product actually is, and therefore I couldn’t agree with you more on that.