the case against ai
        
there has been a flood of different ai vendors offering different solutions to
different problems. the most disruptive of them being chatgpt for the general
populace, and copilot (and friends) for computer scientists. they lure people in
with a promise of taking work off your shoulders, and sometimes they even
succeed (i did have a couple of friends who passed bash scripting assignments
given by a VERY pedantic lecturer, using only the older free chatgpt). however,
i want to argue against using llms if you care about the quality of your work -
that relying on such tools is at best unwise, regardless of their level of
complexity - because they're *fundamentally* unfit for those that really care.



i think it's rather universally agreed that the brain is like a muscle. as
intellectual workers, artists, engineers, designers, anyone that works primarily
with their brain (though i've heard someone say that programmers mainly work
with their eyes, and arguably some do...) - it's desirable to train it.
it's straight up beneficial. if your brain works faster, more efficiently, more
creatively, this has a direct effect on your work results. there are of course
flavors to this - a writer doesn't need to transpile c to x86 assembly and back
in their head, and a reverse engineer won't really benefit from expanding their
vocabulary when it comes to malware research. but the common denominator is that
some flavor of brain "strength" is going to benefit you and make your life
easier in the long run, just as moving heavy boxes in a warehouse all day is
going to be easier if you develop good stamina.

besides plain old health, exercising your brain gives you another benefit -
you get new knowledge, ideas, thoughts. to me, the brain is something incredible
for one big reason - association. if you think about it, it's really all it
does. a face is coupled with the idea of a person, a word is coupled to some
abstract concept, smells are associated with memories. walking happens because
we associate a series of nerve signals with the outcome of moving a leg in
a certain fashion that propels you forward and can be repeated until you are
at your destination (even with respect to variables like whether you're wearing
heels or not!). moreover, what is really unique is that we can make big leaps of
connections across many domains, and it happens often - for example:

- artists: making a connection and translating a concept related to, for
  example, nature into their medium - vivaldi's four seasons
- engineers: getting inspiration from a completely unrelated domain and
  applying its concepts to an engineering problem - see genetic
  algorithms for example

of course, in the day to day, you are not going to make such drastic leaps. but
something smaller happens all the time - in CS, a category theory result gets
used to improve a programming language, or a new probabilistic algorithm is
found and proven faster than others in a domain previously dominated by exact,
deterministic approaches. someone had both pieces of required information
ingrained in their brain, enough to make a leap, the leap happened, and as a
result they came up with a new, creative, unconventional solution to a problem.

yet another nice property of the human brain is extrapolation. if i asked you
to find the value of f(10), given that:

                                   f(1) = 1,
                                   f(2) = 2,
                                   f(3) = 4,
                                   f(4) = 8

you could make an educated guess that f(10) is 512 - the pattern suggests
f(n) = 2^(n-1), so f(10) = 2^9 = 512. this applies to much more abstract
settings too - when you know how to draw a cat, and someone wants you to draw
a tiger, you can leverage your knowledge of drawing a cat (and, well, the rest
of the information you have about the world) and make a very good attempt at
drawing the tiger, even if you haven't been explicitly told how to do it.
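
for the programmers, here's one way to write that guess down - a throwaway rust
sketch, assuming the pattern really is powers of two (which four data points
can suggest but never guarantee):

    // the guessed rule: f(n) = 2^(n-1), extrapolated from the four
    // points above - an educated guess, not a proven fact. assumes n >= 1.
    fn f(n: u32) -> u64 {
        1u64 << (n - 1)
    }

    fn main() {
        assert_eq!(f(10), 512); // the guess checks out
    }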

the combination of association and extrapolation makes the brain a very
powerful tool for basically anything, provided you supply it with lots of
information and experiences. but how do you really put stuff in your brain so
that it can be that effective?

simple, you learn. however you do that - by reading, writing code, practicing
mental transpilation, debating with others, whatever. there's a saying in the
fitness community that the best workout is the one you will do - and this
applies to the intellectual domain as well. the point is making an effort to
develop yourself. that's when you unlock the arcane in your brain.

great rant, but how does that even relate to ai?

situation: you are tasked with implementing an algorithm. this is the first time
you've heard of it, and you're not really sure what the benefit of using it even
is. 

the slow way to resolve this is to start researching - you read papers, books on
the topic, you look at implementations in other languages, you watch a couple of
videos on the problem classes it solves. then, after making an effort to
understand it, you attempt the implementation, resolving errors along the
way, and after painstakingly going through this whole process you have a
nice implementation backed by a good understanding of the problem at hand.
your reward is the knowledge you gained along the way - the next time someone
asks you to write a similar algo, even if it differs quite a bit, you
can easily leverage your knowledge of the first one to solve the new problem
faster - and the solution will be cleaner, more efficient, and more expressive
as well, because you *learned* good practices along the way with the first
attempt.

the lazy way to do it is to just type "write me a fourier transform in Rust"
into chatgpt and copy and paste the result into an ide.
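
for concreteness, the artifact both paths converge on looks something like the
sketch below - a minimal naive dft in rust (o(n^2), not a true fft; the
function name and shape here are just my assumption of what such a result
might look like). the point is that the code itself carries no trace of how it
came to be; only your head does:

    use std::f64::consts::PI;

    // naive discrete fourier transform: X_k = sum_t x_t * e^(-2*pi*i*k*t/n).
    // returns (re, im) pairs to avoid pulling in a complex-number crate.
    fn dft(input: &[f64]) -> Vec<(f64, f64)> {
        let n = input.len();
        (0..n)
            .map(|k| {
                input.iter().enumerate().fold((0.0, 0.0), |(re, im), (t, &x)| {
                    let angle = -2.0 * PI * (k as f64) * (t as f64) / (n as f64);
                    (re + x * angle.cos(), im + x * angle.sin())
                })
            })
            .collect()
    }

    fn main() {
        // a pure cosine at bin 1 should spike at k = 1 and its mirror k = 7
        let signal: Vec<f64> = (0..8)
            .map(|t| (2.0 * PI * t as f64 / 8.0).cos())
            .collect();
        for (k, (re, im)) in dft(&signal).iter().enumerate() {
            println!("bin {k}: {re:.3} {im:+.3}i");
        }
    }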

is the lazy way faster? of course, by a long shot - *for this instance*. what
most people miss about using llms for basically anything that requires mental
effort is that, well, it robs you of that mental effort. you're basically
outsourcing your work to someone else.

i can't stress enough the "rob you" aspect of this. you had an
opportunity to train your brain a bit, and unless you go back, scrap the llm
code, and start from scratch, the opportunity is lost to you. people who argue
that "the outcome is the same", to me, look through a small lens - your program
does a fourier transform either way, but with an ai doing it for you, you're
none the wiser. when someone then asks you to implement a custom variant of the
fft for some weird esoteric data structure that's convenient for some use case,
it might come to a point where the llm will not be able to do it correctly.
it can only draw a cat. your brain is made to draw tigers.

of course, you can turn this into a slippery slope argument and say "then using
any tool robs you of the effort" - which is not really applicable in this
context. if you want to get better at something, you need to exert effort. but
this effort needs to be concentrated on what you actually want to get better
at. if someone cares about programming, sewing their own clothes will not
really make programming easier for them (unless they can make a leap from
needlework to computer science, which might happen!). buying pre-made clothes
of course robs you of the effort of making them, but that's not where you
wanted to develop in the first place. it goes both ways - learning how to sew
is very beneficial if you want to design your own fashion collection, but
implementing a linked list won't really help you (unless you can make a leap
from computer science to needlework, which might happen!).

extending this thought more locally, it's ok to, for example, use a pre-made
template for your website if you don't care about webdev at all (though it's
something nice to know anyway, and your own website is also an expression of
you, so there's that). you are not robbing yourself of the effort, because you
wouldn't care to exert it anyway. there's an abundance of amazing computer
scientists that have *horrendous* looking websites (torvalds, rms, lattner) or
websites that are just some drop-in template (hotz's blog). it doesn't matter
if it's generated by an llm, because the mental effort is spent elsewhere for
those individuals. me personally, though, i'd exert the effort anyway because
of the making-leaps argument i made earlier. i think no knowledge is ever
useless, so the mental effort is always worth it, even if it's not optimal
towards reaching my goals. (learning can also just be fun.)



another, less philosophical argument against auto-generated code (without
proofs of correctness, that is) is that it violates this rule:

                    a computer can never be held accountable
           therefore a computer must never make a management decision

           (ibm slide from 1979)

it's not hard to extrapolate this to *any* decision - decisions about a car
brake driver aren't really management decisions, but an error in a car brake
driver can create a lot of problems. who is then responsible for that error?
if the code is written by a human, the consequences are mostly going to trickle
down to those that were tasked with its development (though this is rarely a
single person's fault). but what if it was written by an ai system? are the
developers of that model responsible? or is it the company that decided to
use it for development? or is it the guy that pressed "enter" to make it run?
if you're in any part responsible for your code, it's much more robust to run it
through people's minds first, and more importantly of course to consciously
test it/prove it and make maintainable, sensible changes. even if llm code is
correct, code is written for humans as well, and ai-generated code might not
pick up the little (or bigger) style details that make a codebase
maintainable - something that directly contributes to software quality. so it
would be unwise to use such a tool for software that you have to maintain.



the last important thought about using ai for work like this is about the
creative aspect. llms, or gen-ai in general, have this "woo" factor because
they can do things that *look* like creative processes. but they're
fundamentally not. creativity is rooted in self-expression - given infinite
ways to reach some goal, we take the one that aligns with who we are, what our
experiences are, our taste, style, etc. back to the draw-a-tiger example - the
goal is, well, to draw a tiger. but there are multiple ways to do it, and all
are made of little arbitrary decisions about the drawing - what's the tiger
doing? is the drawing monochrome or full of vibrant color? is it
hyper-realistic, or is it stylized? in what way is it stylized? all of these
things make your drawing *yours*.

(as far as we know, at least) ai doesn't have a self - it cannot possibly
self-express. for things that are purely creative, e.g. art, ai is basically
useless. ai poetry, for example, is about the most pointless thing to come out
of this - the entire point is to *express* your experiences with the world and
wrap them in beautiful words. there's no experience behind generating something
that looks like a poem as closely as possible. it's a party trick at best. but
an actual artist will never see the appeal of such a tool, because it's
fundamentally in conflict with the notion of self-expression.

(though generating parts of a work and expanding upon them with your own
experiences, i'd say, could be considered art - the self-expression is now
there - so this is one possible use of ai in art)

extending this to programming - this is going to be subjective - but programming
is also a form of art. it's a little more nuanced because it often needs to
solve some rigid engineering problem; but there are always multiple ways to do
it. this is where you can self-express. the stuff about making leaps - that is
also self-expression in a way, and it's very applicable to programming. and if
we are creating side projects for ourselves out of pure curiosity, without the
constraints of some shareholder, then to me this is very much a creative
process - we can do anything, and we choose something that aligns with us. this
extends to every part of the process - choosing the algorithms to solve the
problem, picking a programming language or style, setting the goals for a
project, down to the little decisions about implementation.

ai robs you of that self-expression; the scale of the robbery is of course
different, depending on how much you use it. if you just generated some little
helper macro out of laziness, but did everything else in the project, it's
still very much mostly yours. but if you generated an entire website and just
changed the text around, you can't really claim it's you who did it, and
there's nothing in that code that makes it "you".


given all of these points, i won't really be using gen-ai for the stuff i make.
for me, there's literally no point - i want my creations to express *me*, i
really enjoy the process of learning, and i already reap the benefits of the
mental effort compared to my peers. i don't want to miss out on all the fun and
experience i can gain by doing things myself, without outsourcing them to a
black box model (or anything/anyone else for that matter).

