The Social Files #7: The World Is Our Model
reality vs artificial intelligence + misc thoughts on summer and adult breaks

My friend and I were catching up last week when she asked, "How was your summer?"
"Was?" I responded incredulously. "Since when did summer end?"
Then I realized that it was 8pm and the sun was already beginning to set. The sound of kids playing outside mingled with that of swaying trees beckoning a light fall breeze. Indeed, Mother Nature, despite her latest unpredictability, seemed ready to flip the seasonal switch.
This summer has felt more languorous than most. Languorous — not in an oppressive or stilted way, like it did during the pandemic — just slow and leisurely, intentionally, by design. Inflationary headlines spilling news of impending recession, war, increased gun violence, and extreme weather are unavoidable. But for better or worse, I've grown slightly numb to everything being deemed a crisis. Even if the world is going to shit, the only thing I have any control over is finding some semblance of stillness at the center of all these shitstorms.
I graduated right before Memorial Day Weekend and have managed to whittle away over 3 months of post-graduation, pre-employment bliss. Job applications and a small case study aside, I was pretty useless. I did not jet-set anywhere new, start any projects, or establish any healthy routines. I did reaffirm fragments of life that had been tossed away during grad school. Unrequired reading. Long runs without GPS. City meandering in DC, NYC, and SF. Daydreaming in the park, looking up and marveling at how fast clouds can move. Morning coffee & Wordle with my parents, time with my nephews & nieces, regular journaling, haphazard stretching. It was simple — and exactly what I needed.

More than anything, the summer felt like a culmination of the circuitous path I started some 4 years ago when I got very lost and unsure about what I wanted to do with my life. It was like a deep tissue massage, one where all those stubborn emotional knots are ironed out and you emerge feeling like your spine is a bit straighter and you can walk with more conviction. Amazing what a little self-love does! It's taken nearly all of my adult life to arrive at this place of acceptance, of realizing that I'm running no one's race but my own. While I know that the demands of reality will continue to poke at old wounds, I feel, for the first time in years, properly rested and ready for whatever awaits. (Shout-outs to therapy and friends & family — they do wonders.)
~~~
Even though my self-imposed period of "doing nothing" effectively prevented burnout, kept me sane, and enriched my relationship with myself and others, society often casts judgment on lack of productivity and boredom in general. Boredom is anathema to the cult of productivity, and to be caught in one’s own unproductive company is equivalent to secular sin.
I thought about this recently when I picked up a novel and found myself antsy after 5 minutes when the first chapter wasn't going fast enough. Didn't I have other things to do than read a trashy YA book? No, I actually didn't. Yet my mind continuously flitted to a myriad of other more "useful" things I could probably do and suddenly, I felt a sense of anxious hurry. Naturally, to cope, I picked up my phone to check if I had any urgent messages waiting for me. Nothing. You can, in fact, read your trashy YA book, Lynne.
~~~
Several months ago, for a fellowship with the Responsible AI Institute, my advisor Professor Steven Kelts and I developed a set of scenarios to assess the potential harms of AI when deployed in the defense, medical, and financial credit spaces. There are lots of doomsday scenarios spun about the unintended consequences of AI; ours were probably no less fantastical. But the exercise was useful in helping me make concrete some of tech's larger implications: less its immediate effects (i.e. added convenience and speed) than the subtly insidious ways tech is not explicitly designed for. These second- and third-order effects manifest as new behaviors, values, and thoughts, like the impatience I had with a book that couldn't get to its plot line within the first chapter. Over time, they change us in ways that are difficult to repair.
The point isn't to assign moral judgment; change is inevitable and neither "good" nor "bad", and history will write its own story. But a crucial starting point for any ethics conversation is to recognize that tech is more than a simple gadget that makes our lives easier, and that it is far from neutral.
~~~
One scenario we crafted was about the use of AI to automate x-ray imaging for medical analysis. Excerpted below:
Kendrick is a Columbus-based radiologist with more than 30 years of experience interpreting medical X-ray images to detect and classify various types of cancer. His hospital recently deployed advanced machine learning tools to feed X-ray images directly to algorithms for medical analysis. Kendrick still makes the final diagnosis but he suspects that the machine’s accuracy will soon exceed that of his own. Massive imaging data sets, combined with recent advances in computer vision, have driven rapid improvements in the machine’s performance.
One way the machine is applied to improve diagnosis is through automation of workflow processes. Traditionally, Kendrick would read images in the order they are received. The machine now identifies cases that are most likely to require clinical intervention and places those at the top of Kendrick’s queue, thereby reducing turnaround time for the most urgent cases.
Kendrick generally views this system as a helpful aid for sifting through the high volume of cases and prioritizing those that require more time-sensitive review. However, when asked if he has any ability to reorder the cases, or what dictates the order, Kendrick is at a loss. The machine does provide a brief explanation for some images (e.g. “This image contains anomalous features, which may include predictors of mortality.”) From Kendrick’s perspective, the algorithm hasn’t made any significant errors and has saved him an enormous amount of time, so he rarely questions what he sees.
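The workflow change in the scenario, a model scoring each incoming image and pushing the highest-urgency cases to the top of the reading queue, can be sketched roughly as follows. All names and scores here are hypothetical; the real system's scoring model and interface are not described in the scenario.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Case:
    # heapq pops the smallest item first, so we store negated urgency
    # to surface the highest-urgency case at the top of the queue.
    neg_urgency: float
    case_id: str = field(compare=False)

def triage(case_ids, urgency_model):
    """Reorder a first-in-first-out list of cases by predicted urgency.

    `urgency_model` is a stand-in for the hospital's scoring model:
    any callable mapping a case to a number in [0, 1].
    """
    heap = [Case(-urgency_model(c), c) for c in case_ids]
    heapq.heapify(heap)
    return [heapq.heappop(heap).case_id for _ in range(len(heap))]

# Toy stand-in scores in place of a real computer-vision model
scores = {"xray_001": 0.2, "xray_002": 0.9, "xray_003": 0.5}
print(triage(list(scores), scores.get))  # most urgent first
```

Note that nothing in this sketch lets Kendrick reorder the queue or see why a case was ranked where it was, which is exactly the opacity the scenario highlights.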
AI clearly makes Kendrick's job easier by surfacing the most urgent X-rays at the top of his queue, which allocates resources more efficiently. On the surface, this seems like a net positive. But then we pose other questions: What happens if the model is accidentally run on a different type of scan (e.g. a CT scan) and its output reorders the queue in a counterproductive way? What risks does this pose for Kendrick's job in the future? What does that mean for the future of the radiology profession? And what is a meaningful solution to mitigate these potential harms… can a problem caused by technology be solved by it?
Professor Kelts cautions technologists and ethicists alike against getting so focused on a successful algorithm that larger social questions get pushed aside.
"As much as we can debate the mathematics of how we are going to test whether or not the system is unfair, we could be avoiding the actual question of: what type of society are we really wanting to create here? What kind of institution? What justice system?
"And is this deemed unfair simply by some mathematical check? Or is it more deeply unfair because of the sort of associations or polities we create? It's a very interesting debate. If we focus just on whether or not an algorithm properly predicts the target variable, we could be missing larger questions about what sort of society we're trying to create."
- Professor Kelts
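For a sense of what the "mathematical check" in that quote often looks like in practice, here is a minimal sketch of one common fairness metric, demographic parity, which compares a model's positive-prediction rates across groups. The data and group labels are purely illustrative, and this is just one of many competing fairness definitions.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between groups.

    A small gap passes this mathematical check, but the check says
    nothing about whether the target variable itself is a just thing
    to predict -- which is Professor Kelts' larger point.
    """
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    return max(rates.values()) - min(rates.values())

preds = [1, 0, 1, 1, 0, 1, 0, 0]          # 1 = model predicts "approve"
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A gap of 0.5 here would flag the model as "unfair" under this metric, yet shrinking that number to zero would answer none of the societal questions posed above.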
TL;DR: seemingly technical questions are actually multi-disciplinary ones and when the stakes are high, it's important that we don't latch on to technical details as an excuse to avoid difficult moral quandaries. Data scientists need the humility to say that a technical question is actually a moral one, and social scientists need to create space for technologists to reflect on the code they are writing. Neither can resolve these high-stakes issues alone.
For a fascinating case study on inclusive AI design, read this piece on how a kidney transplant allocation algorithm's use of democratic participation, forecasting, and third party auditing could serve as an instructive model.
~~~
Bringing this back to my idle summer. I used to apologize for having the time and space to take breaks when others didn't — and while I acknowledge how lucky I am, I am no longer apologizing. Instead, I find myself unapologetically treasuring these moments of idleness. I fear that empty swaths of time, once commonplace, will become relics as the smartphone and affiliated technologies become increasingly tethered to our existence. These are the second and third-order effects of technology I alluded to earlier. Steve Jobs probably did not foresee his handy device filtering nearly our every view of the world. Yet already, we are so accustomed to glancing at our phones to satisfy many passing urges. To do truly nothing, unmediated, is becoming a modern feat.
As Brian Christian presciently states in The Alignment Problem,
"We are in danger of losing control of the world not to AI or to machines as such but to models, to formal, often numerical specifications for what exists and for what we want…
One of the most dangerous things one can do in machine learning - and otherwise - is to find a model that is reasonably good, declare victory, and henceforth begin to confuse the map with the territory."
~~~
This summer, the world on its own, unvarnished, was my model. I hope it stays that way as long as possible.
“The task before us, as I see it, is to cultivate an alternative way of being in the world.”