Hello! My name's Nick Macias, and thanks for checking out my site.
Perhaps you saw a link in an email, assignment, or other document stating that no AI was used. If you saw that note on something I wrote, this site may explain some of my reasoning for adding it.
If you got here from something written by someone else, they have their own reasons for what they say, and of course I don't claim to speak for them.
If you'd like to mark your own work as AI-free, and want to use this site as a reference, you're welcome to use the logo and a sample note as you like.
If you'd like to add a "Real I" signature/etc. to your work, you can find a sample signature here that you can copy and paste.
If you'd like to do your own annotation, you're welcome to use this logo (right-click to save it). You can find other versions of it here, as well as information about how it was made (using 3 images from openclipart.org).
I've made a choice to mostly avoid using chatbots like ChatGPT and Gemini, especially for anything where I'm trying to produce creative works of my own. This includes emails, webpage content, assignments/tests for students, stories, music, pretty much anything I try to create. I've also chosen to be more mindful in my use of other tools. Some, such as auto-correct, auto-complete, grammar checking, and so on, I've already been avoiding, partly because they interrupt my workflow. With other tools, such as search engines, I'm trying to notice when and how I use them, and simply decide case by case whether I really want to use them, or whether I'm using them out of habit. You can find more details here.
There are already well-documented arguments about the potential downsides and dangers of AI. Use (misuse?) of chatbots raises potential issues around privacy, intellectual property, built-in bias, and, of course, the simple fact that sometimes these systems give answers that sound convincing but are just plain wrong. There are growing dangers from deepfakes, and from bot farms that feed misinformation to large numbers of people. There are also easily imagined dangers down the road: an AI system that controls lethal-force weapons, with authority to act on its own, is certainly a scary proposition. And, of course, further down the road are even graver possibilities: think Blade Runner, Terminator, or the Borg.
While I share these concerns, I feel there are additional concerns about AI that are less frequently discussed. For example, using a chatbot to write an essay for a homework assignment is not only dishonest (unless the class specifically allows it), but prevents one from honing one's writing skills. Using grammar-correcting software may erode one's ability to write well-formed sentences on one's own. Habitually using auto-correct may eventually degrade one's ability to spell.
More significantly, I worry that as people write less and less, we may be, as a species, impairing our ability to think. When one writes a grant proposal using a chatbot, it's difficult to see where innovation will come from; and when the sponsor reviews applications using a chatbot, it's difficult to see how innovative, truly revolutionary ideas will be recognized by such a system. This is a prescription for system-wide mediocrity. Over time, without new, original input to the system, things may just grow more and more stale. This is not what the world needs right now!
You can find more details here.
As much (or as little) as one chooses! Like any tool, AI has its applications, and if you have a good use for it, or if you're simply curious, or if you just like messing with it, that's great! I used to play with ChatGPT, just to see how much I could mess it up. Eventually, that stopped being fun. Sometimes I'd use it just to see what it could tell me. Then one day I looked up some of the research I had done, and it told me many things, few of which were true. It attributed my work to various random people, and mis-explained fundamental concepts of my research in ways that had nothing to do with reality. So that stopped being fun too. I choose not to use chatbots anymore, but that's just my own choice.
Absolutely not! Aside from not knowing anyone else's situation and circumstances, I have no authority to judge anyone. My main purpose is, when I convey a creative work to someone, to let them know what tools I have or have not used.
Somewhat, yes. I remember arguments about people forgetting how to do math by hand, and counter-arguments that maybe people don't need to know how to do math by hand. I remember arguments that calculators would free students from the drudgery of manual calculations, and allow them to explore the deeper, more interesting aspects of mathematics.
I loved every calculator I had as a kid, from our first add/subtract/multiply/divide, whole-numbers-only calculator, to the more advanced scientific ones. And I did play with them in a way that exposed more of mathematics to me. I loved using the Constant feature to repeatedly multiply by a fixed number, cranking out powers of 2 until the display overflowed. But I found that interesting because I was a geek! My friends who were also geeks enjoyed these things too. My friends who were more into sports had no more or less interest in the Constant feature than in doing calculations by hand.
On the other hand, I believe that learning how to multiply and divide by hand helps one understand the concepts of multiplication and division. The value of doing those calculations is thus not in the ability to do them (because, hey, that's what calculators are for!); it's in the understanding, the conceptualization that occurs while learning how to do those calculations by hand.
In my earlier coding classes, we don't use IDEs: we work on the command line; edit with vi (preferably without syntax highlighting); compile by typing "gcc …"; look up information using "man …"; debug with gdb. Why? Because doing these things oneself helps one learn. Without syntax highlighting, you develop skills around looking at code: identifying different sections, breaking code into blocks in your head. These are useful skills for designing, developing, and debugging code. Indenting code yourself helps you remember, in an active way, where you are in a series of nested blocks. With auto-indenting, I most often see students staring at their code, wondering why the indenting is off, and randomly adding closing brackets (often in the wrong place) until the auto-check indicator clears. Indenting has thus become yet another "pointless" task, a burden to be dealt with, rather than a useful aid in helping one organize one's code.
To me, spelling, grammar, and writing are similar. Learning to do these things, and practicing them regularly, feels important for keeping those skills honed; and if those skills become dull, I feel like one's ability to formulate thoughts and arguments, one's ability to reason, can suffer.
I think this is an interesting question. I know that when I make art (lately mainly music) it's a cathartic process. I'm not making it for others, but for myself. I am working through something inside, and writing music helps me find what it is, explore it, and understand it better. With good art (not saying my music fits this category!), the experience of the observer will be their own. It may not be the same as the artist's, but it may unlock things in the observer in a way similar to how it unlocked them in the artist (though the specific experiences may be completely different!). This is part of the magic/beauty/miracle of art, and I believe it defies algorithmic mimicry. While someday artificial systems may achieve sentience, and may be able to draw on their own experiences, emotions, and conflicts to produce art that resonates with others, I don't believe this will be achieved by simply re-mixing other people's work using probabilistic algorithms.
Arguably, some of the greatest art has come from those who somehow break out from the system of their time. When everyone is painting the same way, and someone suddenly starts painting in a completely new way, it may be the start of an artistic revolution. If all art is created by re-mixing existing art using the same algorithms, from where will the next revolution come?
This is where I start talking about a darker vision of the future. Are you sure you want to go there?
I'm concerned for the future. What I've seen over the past 10-20 years feels like a dangerous trajectory. Social media had the promise of connecting people, bringing together people who might not otherwise encounter one another. Instead (in part because of monetization) it has brought together similar people, while further distancing dissimilar people from each other. I feel like it's amplified the "us vs. them" view that humans seem to tend toward. So society begins to fragment.
Next, we entered the era of "alternative facts." Whatever you believe, there are probably people who believe similarly, and others who believe differently. And those groups tend to stick together. As we've become more comfortable with social media as a source of information, it's also become a way to confirm information; and, of course, this means our beliefs are echoed back to us, further reinforcing them, whether they are true or not.
And now we have AI, which is well known to produce statements that are misleading, or sometimes simply wrong. But we are very comfortable with wrong information, as long as it agrees with what we already believe. This is my biggest concern about AI. As it is used to produce news stories, video footage and audio recordings, our ability to discern fact from fiction will continue to erode. As it is further entrusted to make decisions about who gets what medical treatment, whether insurance pays for your care, whether you ran a red light, committed a crime, should receive a trial or simply be shot, our very humanity is threatened. This is not about robots taking over the planet and wiping out humanity; it is about taking the worst parts of humanity itself, amplifying them and allowing them to run unchecked, until our society becomes so fragmented that our species can no longer survive.
I do see one possible positive outcome. For years, I've seen different institutions rely on similar methods for assessing individuals. For example:
• for college admission, students write personal essays;
• for a job application, candidates write cover letters;
• for seeking funding for research, applicants write grant proposals.
These written artifacts are used to assess one's suitability and worthiness for attending school, interviewing for a job, or receiving funding for one's work.
There have been plenty of stories about ChatGPT passing college entrance exams, or writing essays that impress a review board. Walking past two people at a concert last year, I heard one bragging about having just received a $30,000 grant, and how they did the entire application with ChatGPT, without having to write anything themselves. As a teacher, I see students submit work that has content far beyond the course material, but that matches exactly what I get when I ask ChatGPT.
The message to me is this: if an algorithm can cut-and-paste found material into something that is interpreted by us as brilliant, maybe we need to re-characterize our idea of brilliance. I doubt anyone believes that ChatGPT really wants to go to college in hopes of becoming a better person and contributing to society; yet it can produce an essay that reviewers, upon reading, may interpret that way.
So when we ask someone to submit something in writing, are we really assessing their merit, their commitment, their insight; or are we assessing their mastery of a process that can be described by an algorithm?
My sense is that it's a mix of these, but with too much of the latter; and that perhaps, as people become increasingly aware of how impressive ChatGPT can look, our notion of what is impressive will become more refined. Perhaps we'll look beyond a well-assembled collection of facts, and learn to better look for what is behind the words: the depth, the intention, the passion. That's one of my hopes for the future.
The dream of AI is not new. When the Greeks studied propositional logic, it was probably to try to understand how humans think. While they may not have dreamed of building a thinking machine, the urge to understand something is often the beginning of a journey toward trying to re-create it. Certainly when Alan Turing and others were conceiving artificial calculating machines, there was discussion of artificial intelligence. In the 1950s, the goal of automatically translating one language into another felt like it was right on the horizon. In my career, I've seen multiple periods where we heard that "a great era of AI is just a few years away."
When ChatGPT appeared, I think most people were caught off guard. And it is definitely a huge breakthrough. The things it can do are amazing. But is it AI? There are many things companies call "AI" in hopes of appearing to be on the cutting edge. Can you tell which things are really AI?
People make distinctions between AI, ML, GAI, GPT, and so on. Certainly ChatGPT can pass the Turing Test (a test conceived of by Alan Turing, asking if a machine could fool someone into thinking it was a person). And given seemingly abstract tasks, it often comes up with convincing arguments that look like reasoning is taking place. But with careful prompting, it sometimes falls flat.
As a teacher, I need to come up with test questions that test understanding. If I simply ask students to repeat things they've heard in class, that's not really testing understanding. A good question requires them to have understood some concept, and then to apply that concept in a new way, to a question they've never seen before. When I pose questions like that to ChatGPT, its answers show what's really going on: it's basically matching keywords in examples, picking pieces from those examples, and putting them together in ways that match what most other examples have done. With this in mind, even simple questions can lead to ridiculous results.
This is an increasingly difficult question, for three reasons:
1. Chatbots are getting better at creating well-written text;
2. Humans may be getting worse at it; and
3. Humans are getting used to noise, poor writing, inaccuracies, and outright falsehoods.
I recently received a letter regarding a refund for hotel accommodations due to a missed airline connection. The letter began by saying how, regrettably, the missed connection was due to weather, which is not covered, and thus I am not eligible for a refund.
The next paragraph began: "Moreover, I'm happy to inform you that you are eligible for a refund."
Most likely, the text was generated by a system running through a set of factors from my case and adding appropriate text for each one. The first factor it encountered was probably the weather, which triggered the "no refund" text. Then maybe it saw the length of the delay (2+ days), which triggered the second paragraph. So it was pretty easy to tell that this was computer-generated.
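Here's a minimal sketch of how such a system might stitch together a contradictory letter. I have no idea what the real system looked like; the factor names, thresholds, and wording of the rules here are purely my own invention, for illustration. The key point is that each rule fires independently, with no check that the paragraphs agree with one another.

```python
# Hypothetical rule-based letter generator: each rule appends its own
# boilerplate paragraph, with no consistency check across rules.

def generate_letter(case):
    paragraphs = []
    # Rule 1: weather-related causes are excluded from coverage.
    if case.get("cause") == "weather":
        paragraphs.append(
            "Regrettably, the missed connection was due to weather, "
            "which is not covered, and thus you are not eligible for a refund.")
    # Rule 2: long delays trigger a goodwill refund, regardless of Rule 1.
    if case.get("delay_days", 0) >= 2:
        paragraphs.append(
            "Moreover, I'm happy to inform you that you are eligible "
            "for a refund.")
    return "\n\n".join(paragraphs)

# A case that triggers both rules produces the contradiction:
letter = generate_letter({"cause": "weather", "delay_days": 2})
print(letter)
```

Both paragraphs fire, one denying and one granting the refund; a human writer would have reconciled them, but a pipeline of independent rules never looks back at what it already wrote.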
News stories are another area where I'm starting to see things that just don't make sense. There are some obvious cases, like the one where a basketball player who missed 10 shots (referred to as "bricking") became the subject of AI-generated news stories claiming he was vandalizing homes by throwing bricks at them. But it's getting harder to tell. Just search for "fake ai generated news stories" for an idea of the scope of the problem.
There's another aspect to this, though: it seems like people are realizing less and less that this is even a problem. I recently read a news story about fake news headlines, and the problem of AI generating fake headlines. At the top of the story...was a box labeled "Generate Key Takeaways," offering to summarize that news story for me! I'm guessing it was added automatically, which means editorial control over the content of that page has been turned over to a machine.
This is just a pet peeve that I need to mention. The term "hallucination" gets used a lot lately. It refers to an AI generating an incorrect result. To me, this is pure marketing. Before AI, "hallucination" was a term for when a thinking, sentient being perceived things that were not actually present, or perceived them in a way that was different from how they actually were. It's a dysfunction of a complex system that normally allows people to perceive reality accurately, but in some cases causes these mis-perceptions. When ChatGPT tells me something that's wrong, it's not a hallucination: it's a mistake. If I question it, it will usually change its answer, often to something else that is wrong. "Hallucination" makes it sound like there was a careful, thoughtful process that would have gotten the right answer, but due to some extenuating circumstance, it got the answer wrong this time. That's a marketing pitch! The simple fact is, it got the answer wrong. So do fortune cookies. If your fortune cookie says "You will get a lot of money today" and at the end of the day it didn't happen, you don't say "Hmmm, my fortune cookie must have hallucinated."