Choose Your Own Adventure
Rethinking risk, and moving forward with Neil Postman's wisdom in mind
“Life is either a daring adventure or nothing.”
—Helen Keller
What comes to mind when you read the words “daring adventure”? Swashbuckling pirates? Intrepid explorers scaling Mount Everest? Perhaps the face of a suave and debonair secret agent as he parachutes out of a sleek, silver airplane?
The most stereotypically adventurous thing I’ve ever done was probably going on a camping trip in the middle of my time as an undergraduate engineering student. A handful of other interns wanted to drive down to hike in another part of the Rocky Mountains, and though I barely knew them, I figured I shouldn’t pass up the experience. I remember quite clearly my exhaustion after a long day of hiking — especially my feeling of certainty that I wouldn’t have gotten back to camp without the hiking poles another intern had very graciously lent me — but I have no regrets about my decision to take a chance and come along.
There’s something deeply human about yearning for adventure — but not just any adventure, of course. We long for an adventure in which we have agency, in which we have community, in which our choices truly have an impact.1 We may not know how to find this adventure, of course. As a Millennial in the U.S., I was sold the idea that your career was the way you would “make a difference in the world.” In the age of “quiet quitting” it may seem like we’ve collectively moved beyond such silly ideas — but on the other hand, have we found a better alternative?
In fact, it seems that we have lost much of our thirst for adventure. I expect that the Helen Keller quote above still inspires most of us, but personally, when I encountered it at my local coffee shop, it struck me largely because I hadn’t actually thought of my life as an adventure in so long. Meaningful and increasingly rich, yes, and for this I am thankful — but that’s not the same as an adventure.
Considering she became blind and deaf as a toddler in the late 1800s, I imagine that Helen Keller’s concept of adventure was quite different from what springs to mind for us in the “new millennium” — but what does adventure even mean in the context of modern life? While I could probably spend thousands of words discussing how digital technologies (especially video games) have likely robbed wide swaths of our society of our thirst for real-life adventure, for now I want to consider a different perspective on this question. Specifically, I want to consider our society’s perceptions of risk and choice.
As a quick reminder, you don’t need to carve out time to read this whole long-form piece at once; each section is designed to be read in 5 minutes or so. Please do use the headers to help you come back as your priorities and “real life” allow!
One: Rethinking Risks to Children
It may seem obvious, if one really considers the concept of adventure, that there can be no adventure without some degree of risk — but this connection is largely obscured by the very negative connotation that “risk” carries nowadays. Especially among parents, “risk” is almost synonymous with “irresponsibility”; even if you don’t think the risk you’ve chosen to accept for your family is irresponsible, it’s likely that someone else does, and that you feel the weight of that judgment even before it’s rendered.
But what if the type of risk matters more than the level of risk?
I unapologetically hold that it is unacceptable to subject my children — and I’d argue all children — to certain types of risk. For example:
It’s unacceptable that children are now committing suicide because they were exposed to predators who threatened to release AI-generated pornographic images of them if they didn’t comply with the terms of the predator’s “sextortion” scheme.2
It’s unacceptable that Roblox — a game which is enthusiastically played by millions of elementary-age children, including my children’s classmates — has just been described as “an X-rated pedophile hellscape” by one research organization.3
It’s unacceptable that unsupervised time on YouTube could expose my children to videos of beheadings or — admittedly more likely — videos promoting self-harm, like the ones podcaster Nicki Reisberg recently found during a quick “spot-check” she performed on her daughter’s “school-issued device” — within her daughter’s classroom.
Unfortunately, the risks that lead to these unacceptable situations aren’t entirely clear-cut. As was recently noted in this great, short piece, it’s been said that “the right age to give your kid a smartphone is whenever you’re prepared to give them unlimited access to hard-core porn.” I think this is an excellent point, but unfortunately, none of the issues I listed above would be prevented by keeping smartphones out of kids’ hands.4 Social media, that lightning rod of recent public sentiment, is really only relevant to the first.5

The problem is that it’s impossible to keep abreast of, much less anticipate, all of the individual risks of digital technology. I spend a lot of my time and attention learning and thinking about digital technology, so the dangers of sextortion, Roblox, and the like were on my radar — I operate on the premise that it’s an unacceptable risk to give completely anonymous strangers access to my children — but even I hadn’t thought of the possibility that my first and third graders could theoretically be accessing YouTube in their classrooms (on the Chromebooks that stay at school) and having content that promotes self-harm “served up” to them. My absolute insistence that only adults control our “smart TV” has no bearing on that scenario, and I don’t think I would have even considered it, were it not for Reisberg and others like Emily Cherkin who are speaking out about the dangers of “EdTech.” The fact is, the standard “piecemeal” approach to managing digital technologies is not enough.
Unfortunately, it’s not enough to simply try to “think bigger” about the question, either. Especially recently, it seems our collective conversation is centered around figuring out how we can effectively minimize the “harmful content” that our children are exposed to across a broad range of “platforms.” Whether the proposals are technological, legislative, or both, the underlying idea is that the “titans of technology” should bear significant responsibility for ensuring that the users of digital technology, especially children, are not harmed by their use. This idea does seem very reasonable on its face, especially when you consider comparisons to other industries in which government regulation has become the norm — as U.S. Surgeon General Vivek Murthy has noted, our society has demanded that cars, airplanes, and even our food supply be made safer, and effective changes have been made in those spheres.6
However, this completely bypasses the question of why we should accept any degree of influence that is truly toxic to our children, especially during their most formative years. Must we allow our children to be formed and molded by imagery that literally leaves them scarred — even in “minimal” amounts? Why do I keep reading and hearing that it’s not a matter of whether children will be exposed to “harmful content” via digital technologies, but when? When did that become acceptable?
In his op-ed calling for a warning label on social media, Vivek Murthy asserted, “The moral test of any society is how well it protects its children.” The stark reality is that we are failing this test.
We don’t need to wait until we have a stack of studies that definitively demonstrate the link between children’s use of social media (or any other digital technology) and clear-cut harms, though it feels like a whole army of technology “experts” is trying to convince us we do. We can act immediately on the wisdom of Charlie Munger, whom Tristan Harris and Aza Raskin of The Center for Humane Technology regularly quote:
“Show me the incentive, and I will show you the outcome.”7
I’ve discussed before, especially in “The Inadvertent Abolition of Man,” how the “titans of technology” have been working tirelessly from the very beginning based on incredibly perverse incentives — mainly “consum[ing] as much of your time and conscious attention as possible.”8 We may be able to use legislation and other collective action to shift their incentives toward “protecting” children more than they do currently, but we must honestly admit that the titans of technology do not have their users’ best interest at heart — and children who use these “safer” digital technologies will continue as “users” past their childhoods. As former social media influencer Erin Loechner puts it in her book The Opt-Out Family,
“[T]he world our children live in…is not an accident. It is the work of an algorithm. It is the work of a machine and a mission, a grand strategy dreamed up by people in boardrooms who make a living by stealing a life.”9
I want to be clear — I think it is necessary to “stem the bleeding,” so to speak, with legislation and other collective action focused on protecting children. That’s why I’m involved with the group Smartphone Free Childhood, and I strongly recommend my fellow parents explore the resources and support that they offer.10 I also strongly believe that our efforts to protect children will be in vain if they are not accompanied by a more fundamental shift in the culture of our families, our communities, and beyond — and this shift will require each of us to choose differently.
Two: Reconsidering Risks to Us All
I mentioned above that Vivek Murthy, the U.S. Surgeon General, drew on the example of unsafe planes as part of his call for a warning label on social media. Interestingly, Tristan Harris and Aza Raskin of The Center for Humane Technology also drew on the concept of an unsafe airplane in their early 2023 talk, “The A.I. Dilemma”:
“This is a stat that took me [Tristan] by surprise: 50% of A.I. researchers believe there’s a 10% or greater chance that humans go extinct from our inability to control A.I. Say that one more time — half of A.I. researchers believe there’s a 10% or greater chance [of extinction] from humans’ inability to control A.I.
“That would be like, if you’re about to get on a plane, and 50% of the engineers who make the plane say, ‘Well, if you get on this plane, there’s a 10% chance that everybody goes down.’
“Would you get on that plane? Right?
“But we are rapidly onboarding people onto this plane...”
The thing is, no matter how much more convenient it is to travel in an airplane than it would be to travel by car, ship, or train, airplanes are generally perceived as an optional technology. If a plane isn’t safe, we don’t hesitate to “ground” it, as in the case of the Boeing planes that Murthy references, or simply not get on board, as Harris and Raskin imply. We understand that there are other ways to travel — even other planes in which to fly with much lower risk — and we act accordingly. And yet, as Harris and Raskin point out, we do not act accordingly when it comes to certain more “innovative” types of technology like social media or A.I.
It’s actually quite striking how our society’s conception of the “risks of digital technology” has evolved. A few short years ago, we collectively wondered how much of a problem it was that looking at our phones too much could damage the alignment of our necks; nowadays this idea has fallen off essentially everyone’s radar. It’s not that this was proven to be an unfounded worry; we simply don’t care anymore. Though we probably don’t consciously admit it, we’re willing to sacrifice our physical well-being to the way of life that has become “normal.”
Even more distressingly, we’re willing to risk our most fundamentally human qualities to this way of life. You may have read the brutally honest piece “On The Degrading Effects of Life Online,” and might feel that much of it doesn’t apply to you personally because you’ve cut social media out of your life — but we can all feel the truth in the Gaskovskis’ assessment of how digital technology in general is making us less human. For example, in their recent conversation with The Beatrice Institute, Ruth explained:

“[I]n which basic sense does [digital technology] make us less human? I think, at two most basic levels: one is it corrupts our relationships, and two, it corrupts our attention, and both of those go together. I’ll start with attention. I think the average college student now switches tasks every 19 seconds; you can’t even brush your teeth within 19 seconds. Which means we are so distracted — helplessly distracted — and enslaved by these devices that we can’t attend. If we can’t attend, if we lose all train of thought, we can’t attend to each other; we can’t attend to what we are reading; we can’t learn; we can’t produce quality work; we can’t attend to democracy, because we can’t attend to what is true, what is not true. We can’t attend to scripture or our prayer. So, I think it’s making us less human because it corrupts our attention, which is necessary for everything that we do.
“I think most importantly, it translates into our relationships. It always kind of pains me when, for example, I see young mothers — and this is not to blame anyone, it’s just an observation — when you see young mothers, who are nursing their child, but they’re not looking at the child at their breast, they’re looking at their phone because it feels like downtime, and maybe I can spend some time on the phone. This is the most basic relationship, and we see it in lots of small ways, where things kind of just get interrupted and we don’t look at each other or we don’t take the time to speak with each other. I think that’s the most fundamental part in which it is making us less human.”11
So maybe we can also take the airplane comparison one step further: much of our society’s current approach to digital technology is akin to frequenting an airline that gives us no flight information whatsoever. Each of us is expected to enthusiastically board an airplane without having any idea of where it will take us — and despite having a sinking feeling that it will take us somewhere we don’t want to go. Maybe we’ll end up somewhere entirely foreign and genuinely dangerous, or perhaps we’ll simply find ourselves somewhere far from our friends and loved ones, but pleasant enough otherwise. Regardless, we’ll have to find our own way to the destinations we really want to reach — if we can even muster the focus to figure out what those destinations are, amidst the din of the airport. After all, the airline is in the business of taking us where it wants us to go, and making us believe that’s where we should want to be — not empowering us to take the journey that’s actually in our best interest.
Why, then, do we still “get on board,” and bring our children along for the ride? I think the answer lies in our perception of choice.
Three: The Invisible Choice
I have a controversial argument to make: our society obscures our full range of choices about digital technology, and warps our sense of the benefits that “abnormal” choices could bring.
Now, I expect many of us would instinctively object to such an assertion; it is unsettlingly close to a claim like, “most people believe we have no choice but to use harmful digital technologies.” After all, though “tech addiction” and related terms have come into mainstream use, no one wants to compare himself with the stereotypical drug addict, who chases his next high no matter the cost or the danger of overdose. No one even wants to admit that they effectively act as though digital technologies are “kind of, like, the only way to communicate,” as a faceless teenager declares in the powerful trailer for the new documentary series “Social Studies”. I know I’ve felt somewhat hesitant to use the term “addiction” in my writing — possibly because I’ve seen how crippling true tech addiction can be in the cases of two of my old friends from high school. No, for most people I think the reality is somewhat subtler.
In the final weeks of 2023, Cal Newport (author of Digital Minimalism) penned an essay in The New Yorker that helped me make sense of this more complex dynamic. It was titled “It’s Time to Dismantle the Technopoly,” and in it Newport argues that “As technology accelerates, we need to stop accepting the bad consequences along with the good ones.” The title alludes to what Newport calls the “masterwork” of communications theorist and cultural critic Neil Postman — the book Technopoly: The Surrender of Culture to Technology. When I was considering these questions earlier this year, a vague recollection of Newport’s essay led me to pick up Technopoly, and I couldn’t help but agree that it is a masterwork. Written while Meta’s founder Mark Zuckerberg was still in elementary school, Technopoly traces our society’s dysfunctional relationship with technology back to well before the advent of the Internet, and shows that the eventual consequences of our society’s relationship with technology are not the result of some inscrutable confluence of factors.
As Newport summarizes in his essay, we are long past what Postman calls “technocracy,” the era “in which technology made a bid to become more important than the values that had previously structured existence.” That was the era of telegraphs and railroads. We now live in the era of “Technopoly,” the result of technology’s dominance over “all forms of cultural life,” which Postman argues began well before he was writing in the early 1990s.12 As Postman explains within the book itself:
“[Within technocracy] two opposing world-views — the technological and the traditional — coexisted in uneasy tension. The technological was the stronger, of course, but the traditional was still there — still functional, still exerting influence, still too much alive to ignore.…In a word, two distinct thought-worlds were rubbing against each other in nineteenth-century America.
“With the rise of Technopoly, one of those thought-worlds disappears. Technopoly eliminates alternatives to itself in precisely the way Aldous Huxley outlined in Brave New World. It does not make them illegal. It does not make them immoral. It does not even make them unpopular. It makes them invisible and therefore irrelevant.”
Quite simply, our society has moved from developing and using technology for the betterment of humanity, to developing and using technology for technology’s sake — without fully understanding how such a shift would affect us or our children.
This shift occurred without the conscious choice of society’s individual members, just as Technopoly continues today without people’s conscious choice to participate. In fact, if Postman is right in dating the dawn of Technopoly close to 1910, essentially all of us alive today are so far removed from this shift that Technopoly is all we’ve ever known. That’s certainly how it feels to me; Technopoly is simply the default position, the way things are. Outside of very intentional communities like the Amish, there is no rite of passage in which we each decide whether to choose Technopoly or a different way of life.
We’re told that technology has ushered in an era of unprecedented choice — never before have we been able to choose from among so many different cuisines, so many different potential “partners,” so many different worldviews — but what about the worldview that questions the universal adoption of all this technology? Technopoly renders it irrelevant. As Newport writes, “Decades of living in a technopoly have taught us to feel shame in ever proposing to step back from the cutting edge” — and for so many people, the idea of stepping back from the cutting edge is simply outside the realm of anything they’ve ever considered.
I believe that Newport is right that in recent years we’ve seen growing “rejection of technopoly’s demand that we accommodate and embrace whatever innovation we’re told to be excited about next” — and for this I am grateful — but unfortunately, I believe he’s also correct that “this new attitude” is far from widespread. Of course, rejecting the status quo is much easier said than done; even after we’ve come to terms with the immensity of the problem, we’re left to make — and then hold fast in — an endless series of deeply counter-cultural choices. It can feel exhausting to even consider the challenge involved in keeping this up, especially as technology will certainly continue evolving and creating new problems as the years pass.
This is where I think the concept of Technopoly can be deeply, practically helpful, especially for those of us who struggle to move forward without an answer to the question of how we got here. If our society has been operating as a Technopoly for far longer than most of us have been alive, it’s perfectly reasonable that what seems glaringly obvious to some of us is anything but obvious to so many other people. If our societal malaise is more than a century in the making, we don’t need to wonder how we’ve gone so far “off the rails” in the less than two decades since the introduction of the iPhone. And if Postman is right, we’re not facing an unending struggle to understand and then defend against a myriad of separate harmful technologies — we’re simply encountering the various ways a pernicious ideology manifests after a century of primacy.
We’re not engaged in an endless, hopeless battle, desperately trying to protect the things that matter most from the most powerful companies the world has ever known — nor are we surrendering to the dictates of Technopoly. We’re choosing our own adventure.
With this mindset, I believe we can summon the courage and resolve to choose differently, even when it feels difficult or uncomfortable — and joyfully show our families, friends, and broader communities that they too can choose a different life.
Four: Taking Action Together
Of course, there’s still the question of what exactly “choosing differently” looks like in practice. I appreciate Newport’s call to “act on what we see,” but what does that mean for each of us who is living an “average” life? There’s a whole wealth of insights to unpack from Technopoly, and I hope to spend more time doing so in the near future within the context of a collaborative “read-along” — but for now, I’ll focus on one aspect of the “assumptions and thought-world of Technopoly” that Postman outlines shortly after the quote above: “What cannot be measured either does not exist or is of no value.”
In the world of Technopoly, we are taught from an early age that the work and accomplishments that matter are measurable. What starts with “letter” or numerical grades in elementary school (or earlier) evolves naturally into the expectation that we will set our goals and report our contributions in very concrete terms — though in fact, there’s nothing “natural” about the idea that you can assign a quantitative value to human thoughts, as Postman explains.13 Unfortunately, I learned Technopoly’s lesson well over my many years in school, and it would be easy for me to judge my “success” in 2024 based on the growth of the Digital with Discernment archive or subscriber list. And yet, a year after the Gaskovskis’ initial publication of their “3Rs,” I can see that I’ve learned and grown a great deal in ways that defy quantification. For one, in the past year I’ve devoured books like someone who’s finally rejoined a feast after a decade of settling for snacks. Even more significantly, I’ve discovered that the Gaskovskis were right — you can “return” to the joy of embodied, human relationships, even if you feel like you’ve never really known it apart from the interference of digital technology. Though I could never measure it, I’ve found immense value in learning to strip Technopoly of its power over “just” me personally — especially its power to divert my attention away from those things that have always mattered more than technology.
Put another way, we must not accept Technopoly’s definition of the “highest impact” ways to spend our time. Of course, this will look different for different people, and in different seasons of our lives. For me, in my current season, rejecting what Technopoly has taught me about how to spend my time involves a lot of reminders that C.S. Lewis was right: “[W]hat one calls the interruptions are precisely one’s real life.” This looks like stopping in the middle of drafting this section to go get my toddler from yet another failed attempt at a nap, and pausing my editing of this section to save my toddler’s toy from under our deck (several days later, to be clear). It means finally admitting to myself that focusing on my family, especially my school-age children who are away for the day, is a better use of my evening hours than reading on my phone, no matter how much more edifying Substack articles are than the “content” I used to endlessly scroll.14 It includes taking my toddler seriously when he becomes upset that I have my wireless earbuds in (yet again), rather than insisting that listening to a podcast while caring for him is simply a harmless way to use my time “productively.”15 That’s not to say that I’ve found the perfect balance, of course — I have plenty of work to do around my management of asynchronous communication, for example — but it is to say that you can find your personal and family rhythms and then determine how “technology fits in around that,” even if you don’t have a “pre-technology” reference point from your own experience.16 Though you’ll never be able to quantify the benefits of doing so, I promise it’s worth it.
I’ll be honest — the next adventures I see for myself, just over the horizon, have me feeling a bit like Tolkien’s Bilbo Baggins — who, upon encountering the wizard Gandalf just outside his front door, calls adventures “nasty dangerous uncomfortable things.” Still, as my kids continue to march closer to the ages at which we’ll be grappling with group chats, the prevalence of online pornography, and more, the choice will become unavoidable. On the one hand, I could congratulate myself on all I’ve accomplished in my own home, and forgo the risk of being considered strange within my “real-life” community, in exchange for unacceptable risk to my kids — or, I can begin having the uncomfortable conversations with my kids’ friends’ parents about how we can all choose a better way for our families. Technopoly wants me to think that the “likes” and comments online matter more, but I know which choice I must make.
I imagine I’ll continue to feel a kinship with Bilbo as I pursue the second option — “to the end of my days” unable to remember exactly how I began, and feeling woefully underprepared as I set out — and to some degree, I think this feeling will be justified. Even back in the 1950s, Tolkien’s friend and fellow Inkling C.S. Lewis sensed that “the greatest of all divisions in the history of the West” was underway, and the change had “probably not yet reached its peak,” though it had already created a “chasm” somewhere between the 1800s and the present.17 All of us on this side of the chasm are probably right to feel at least a bit disoriented or nervous — and yet, as the wizard Gandalf tells Bilbo’s nephew many years later, “all we have to decide is what to do with the time that is given us.” No matter what lies just beyond the horizon, I’m confident that we can find our way together with our fellow Technopoly dissidents. After all, life is either a daring adventure, or nothing.
Thank you for reading this edition of Digital with Discernment! I’d be honored if you’d take the time to help continue the conversation about Neil Postman’s Technopoly, or the concepts of risk and choice that I covered earlier, by leaving a comment or sharing this piece. I’d also love to hear how you’ll be choosing your own adventure :)
Thank you to Peco Gaskovski for his exploration of this concept in “Build a Songbird Compass: Agency, Communion, and Tech”.
Everyone — and certainly every parent — should take the 40 minutes to listen to this episode about “deep fake pornography” from the “Your Undivided Attention” podcast. I got through the bulk of it in the course of doing one evening’s dishes (and a bit of laundry).
I thought the July Bloomberg article “Roblox’s Pedophile Problem” was damning, but this even more recent report from Hindenburg Research is even worse. The latter is a quick read and every parent should take a few minutes to engage with it — though not with your child reading over your shoulder. The full report with screenshots can be found here, and if you are certain you won’t be interrupted, especially by children, Hindenburg’s deeply disturbing video compilation of their findings can be viewed here. (It’s linked in the full report, with a note that Hindenburg had to move it to X because YouTube flagged it as inappropriate almost immediately.)
Mitigated, to some degree, but not prevented.
YouTube is sometimes considered social media, and rightly so, but in this case I consider it to be outside the scope of “social media” because exposure to these types of toxic videos is in no way contingent on having an account on YouTube, producing one’s own “content,” interacting with others within the comments section, etc.
See his initial New York Times op-ed; he’s also spoken in many other forums since.
See the most recent episode of the “Your Undivided Attention” podcast, as well as their discussion of the first AI Insight Forum in Washington from just over a year ago (which includes a transcript).
See this interview with Sean Parker, the first president of Facebook (now Meta). See also this post from the organization Smartphone Free Childhood with quotes from a variety of leaders in digital technology.
From pg. 3 of The Opt-Out Family: How to Give Your Kids What Technology Can’t by Erin Loechner. (I expect I’ll have many more quotes to share from this book as I continue to work through it; I’m really enjoying it so far.)
This link will take you to the U.S.-specific organization, but many readers outside the U.S. can connect with similar organizations where they live. Smartphone Free Childhood was actually originally founded in the UK, and you can find their site here. See also the list of “Aligned Organizations” on the website of Jonathan Haidt’s book The Anxious Generation.
Check out the full interview, and the Gaskovskis’ introduction to it, for many more edifying insights — including Peco’s explanation of how digital technology robs us of our full sense of agency.
Note that although Newport writes “technopoly” with a lowercase “t” in his essay, I will follow Postman’s convention in the book itself and use the uppercase “T”, except when quoting Newport’s essay directly. Italics will distinguish when I mean the book title rather than the concept.
Postman’s discussion of the “technology” of “assigning marks or grades” truly blew me away, but unfortunately I simply don’t have the space to cover it in more detail here. Note that the end of the sentence is nearly a quote from Postman (with the wording rearranged); see pg. 13 of the edition of Technopoly cited above.
It should be noted that Substack “Notes” is also a dangerous “place” for those of us who have struggled with the habit of endlessly scrolling. I’ve read some claims that Substack isn’t designed for or conducive to such scrolling, but this is simply not true in my experience. I had to uninstall the Substack app after seeing my screentime metrics from the one Saturday I had it on my phone.
And yes, I know it was the earbuds, because on multiple occasions he checked both my ears to make sure I wasn’t listening to someone else through my earbuds.
I really appreciated Peco’s description of “routine hygiene” (from which the phrase in quotes above is drawn) as well as Ruth’s follow-up, beginning at the 28:14 mark of their interview with the Beatrice Institute.
See Lewis’s “De Descriptione Temporum” (Latin for “On a Description of the Times”), his “Inaugural Lecture from the Chair of Mediaeval and Renaissance Literature at Cambridge University, 1954”. It’s especially interesting to note that Lewis and Postman agree on the timing of this “chasm,” though of course they approach it from different perspectives, each according to his own expertise. Many thanks to the reader who brought this work of Lewis’s to my attention.
Thanks for this.
With regard to the conversation about "risk", my wife and I have found it illuminating to think about risk not as a single variable that occurs along a continuum, but instead as a composite of several distinct, non-overlapping components.
To illustrate: we try to make our family life relatively tech-free (i.e., we don't have a TV, don't use screens for our kids, and plan on delaying smartphones for a long, long time). In doing this, are we "sheltering" our kids? In a way, yes! Absolutely! There is no shame in that for me. We are sheltering them from what seems like very acute spiritual and psychological risk.
By contrast, although our kids are pretty young (the eldest is 5), we try to encourage independence and physical exploration quite a lot (running around with sticks, climbing and jumping off things, etc). We have a relatively high tolerance for physical risk. Similarly, when there is conflict or friction in their relationships, we try to let them deal with it themselves - thereby incurring some degree of "social risk".
As we've reflected on this, what has seemed so strange to me is that, compared to many of our friends, we have a much lower tolerance for spiritual risk and psychological risk (to the degree that family members tell us that "you can't shelter your kids this way forever!", implying that we are overreacting or being unreasonable) but a much higher tolerance for physical and social risk (judging by the way some people look at us as I throw my 2-year-old 4 feet in the air, or the nervous noises people make as our kids climb on the playground). I think part of it is that the spiritual/psychological risk either seems invisible (or worse, inevitable) to many, while the physical/social risk perhaps seems like something they are more able to control. But it remains puzzling to me.