Does Healthcare Need More Programmers? Or, More MacGyvers?
Three stories about computer programming caught my attention this week. Admittedly, this is out of my wheelhouse; while I like to think I could have learned to program, the fact is that I never did, which probably makes me like a lot of you. We all use products and services that rely on software, but most of us probably don't give it much thought.
I promise I'll relate this to health care, but first the three stories.
The first was in Wired, about something called the Mover Kit. The Mover Kit is made by an interesting organization called Technology Will Save Us, whose mission is "to spark the creative imagination of young people using hands on technology." Their previous efforts let kids create their own electronic music or their own games, but they've just announced a Kickstarter campaign for the Mover Kit, which allows kids to make and code their own wearable devices.
As they describe it, "The Mover Kit is an intuitive way for kids ages 8 and over to learn the fundamentals of electronics, programming, and solve problems creatively. It encourages kids to learn by doing what they do best -- being active and playing!" TinyBop's Infinite Arcade uses a similar approach, letting kids design and build their own video games.
Pretty cool, huh?
CIOs are wailing about the shortage of IT talent, calling it an "existential threat" to their companies' futures. They are realizing that they need new strategies to attract this talent. Getting kids to learn how to program while they think they are having fun sure seems like a good one.
The second article was an op-ed in TechCrunch by Basel Farag, Please Don't Learn to Code. At first blush, it would seem to give the opposite message from the one the Mover Kit and Infinite Arcade are promoting. However, Mr. Farag's message is more nuanced than that.
He concedes that programming skills are important, but not for everyone. As he says, "I would no more urge everyone to learn to program than I would urge everyone to learn to plumb" (alas, yet another skill I never acquired).
His key point is that the crucial skill is understanding the problem, especially since programming methods and languages can change rapidly. There may be a shortage of people who can program well, but I'm willing to bet that there is a far greater shortage of people with the analytical skills to truly get to the bottom of problems.
I've already written about how this may be especially true in health care.
The third story suggests that the fuss about programming may be short-lived. Writing in Wired, Jason Tanz predicts that the rise of artificial intelligence and machine learning means that soon we won't program computers; we will "train" them.
He gives an example: "If you want to teach a neural network to recognize a cat, for instance, you don't tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don't rewrite the code. You just keep coaching it." Gosh, and I thought debugging a program was hard; imagine trying to coach it!
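For the programmers among us, here is a minimal sketch of what "training instead of programming" can look like, using the scikit-learn library. The "cat"/"fox" labels and the random stand-in "photos" are purely illustrative assumptions on my part, not anything from Mr. Tanz's article:

```python
# A minimal sketch of training a classifier from examples instead of writing
# explicit rules. Assumes numpy and scikit-learn are installed; the data here
# is random stand-in "pixels", not real photos.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "photo" is a flattened 8x8 grayscale image (64 numbers).
# In a real system these would be thousands of labeled photographs.
images = rng.random((2000, 64))
labels = rng.integers(0, 2, size=2000)   # 0 = "fox", 1 = "cat" (illustrative)

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, random_state=0
)

# Nobody writes rules about whiskers, ears, fur, or eyes; the network adjusts
# its own internal weights from the labeled examples it is shown.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200, random_state=0)
net.fit(X_train, y_train)

# If it keeps misclassifying foxes as cats, you don't rewrite this code --
# you show it more (or better-labeled) examples and train it again.
print("accuracy on held-out photos:", net.score(X_test, y_test))
```

With random data the accuracy hovers around chance, which is the point: the behavior comes from the examples, not from the lines of code.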
This is not futuristic, pie-in-the-sky thinking; this is already happening. Ask Google, Apple, or Facebook. One consequence of computers, in essence, programming themselves is that their supposed masters -- i.e., us -- don't and won't always know exactly what they've done or how it works. As Google's Andy Rubin told Mr. Tanz: "After a neural network learns how to do speech recognition, a programmer can't go in and look at it and see how that happened." The programs become, as Mr. Tanz says, a "black box."
It may sound scary to have our society relying on programs that we didn't write and can't understand, but Mr. Tanz views this optimistically. He sees it as democratizing programming; it no longer will depend on nerds writing in arcane computer languages, but rather on people figuring out how to get programs to learn what we want them to do -- e.g., "It will be accessible to anyone who has ever taught a dog to roll over" (something I have, yet again, never done).
For me, what these articles have in common is the importance of figuring things out. We tend to view programming as a hard thing best left to other people, and it's easy to forget that programming isn't the problem itself but merely a means of solving the problem.
Health care is full of black boxes. As much as we think we've learned about the human body over the last hundred years, we're still constantly reminded of how little we actually understand its workings (e.g., the microbiome). As much time and money as we spend training physicians, much of how they diagnose and design treatments for patients remains a mystery. And does anyone know why we always have to fill out so many damn forms?
The many organizations working on applying AI to health care are trying to figure out some of these black boxes, although their solutions may come at the price of new black boxes. I hope, though, that we don't just turn things over to AI. We still need people to figure out the problems.
That's why I love reading about initiatives like MakerHealth, about which I could have written an entire post. It started out as MakerNurse, based on founder Anna Young's realization that front-line nurses could design tools to help improve patient care. Launched out of the Little Devices lab at MIT and supported by the Robert Wood Johnson Foundation, it now has mobile spaces in numerous hospitals and a dedicated MakerHealth Space at the University of Texas Medical Branch in Galveston. Their goal is to empower not just nurses but all staff.
They believe in "democratizing the tools of health making around the world. Whether it's a hospital bed, or a smart pill bottle, or an improved triage mobile phone app, we believe that design should be transparent, hackable, and enabling for everyone to be the designers and makers of their own healthcare solutions." The word that often seems to be associated with them is "MacGyver."
Health care could use fewer programmers and more MacGyvers.
Figuring Out the Code was authored by Kim Bellard and first published in his blog, From a Different Perspective.... It is reprinted by Open Health News with permission from the author. The original post can be found here.
- Tags:
- Andy Rubin
- Anna Young
- artificial intelligence (AI)
- Basel Farag
- black boxes
- democratizing the tools of health making
- hackable design
- health care
- Jason Tanz
- Kim Bellard
- Little Devices lab at MIT
- MacGyver
- machine learning
- MakerHealth
- MakerHealth Space
- MakerNurse
- neural network
- Robert Wood Johnson Foundation
- TechCrunch
- Technology Will Save Us
- The Mover Kit
- transparent design
- understanding the problem
- University of Texas Medical Branch in Galveston
- wearable devices
- Wired