For Africa, Ebola may be here to stay

In the world of viruses, Ebola is like a chickenshit kid with a gun. Chickenshit because, unlike its more deadly rivals (virals?), it hasn’t managed to evolve to be airborne – think influenza strains like H1N1. With a gun, because it’s especially deadly (for a chickenshit). So deadly that it works to its own evolutionary disadvantage by wiping out its hosts quickly – too quickly – usually devastating whole rural villages and then limping back into obscurity when it finds no new victims. Ebola is not new – it’s been around for a long while. I first read about it in Reader’s Digest as a child: a really bad outbreak that terrorized rural Congolese villages in 1976. In every case that I am aware of, Ebola burnt through the population and then ran out of steam. And it’s usually been confined to remote, forest-adjacent villages.

But what we have right now is entirely new. Ebola has finally jumped the rural-urban border into the hot, dense cities of West Africa*. One of the chief features of these cities is pretty terrible health infrastructure. Being African myself, it’s still hard to fathom how vulnerable people are to health issues, how non-existent health insurance is, how woefully inadequate health institutions are, and how utterly at the mercy of disease most people are. I’ve had family members expire prematurely because of terrible hospital care, a misdiagnosis or even simply superstition (that made getting healthcare much tougher).

So a disease that is ordinarily chickenshit, and should be fairly easy to control in a place with decent healthcare management systems, is loose in urban Africa. Lagos alone has over 20 million people packed in pretty tightly. The epidemiology of Ebola (as I understand it) implies that it will have willing hosts and high probabilities of transmission in an urban setting that is not decisively controlled by healthcare authorities. This very weakness also means that the population will have to take protection upon themselves, changing everyday activities like shopkeeping, buying and selling, sex, and other mundane interactions – basically sowing grave distrust into mere commerce and the everyday existence that requires human contact. Someone should be calculating the economic damage, right about now.

I am not aware of any active viral disease that has been managed successfully in an urban setting in Africa without deploying vaccines. When I was growing up, the most reliable solution to the most terrible scourges (yellow fever, mumps, measles, polio, etc.) was to get everyone vaccinated. African countries can usually handle that. It’s a one-time, almost set-and-forget investment that means you can come into contact with the disease with relative impunity. And if you didn’t get vaccinated, oh well; you were warned, sucker! Given that Ebola requires active management of a kind that may be scarce, one struggles to see how it can be cleanly wiped out in the absence of a vaccine.

If all this stuff is true (ok, big if), one plausible scenario is that Ebola becomes a low-grade urban disease: never actually wiped out, occasionally offering the illusion of complete control, but jumping out of blind corners to claim a handful of lives every few weeks and managing to soldier on. Combine that with the possibility of rapid evolution and mutation (now that it has more room to stick around), and you have a possibly unprecedented threat.

The debate over ZMapp – the experimental Ebola drug – has been kind of annoying, and somewhat misguided. Of course West Africans should have ZMapp, stat! Really, how much more valuable is an American or European life than an African one? The only real gate, which honestly has to be self-assessed by African countries, is whether their health care systems can handle its efficient distribution, given its demanding requirements for preservation in transit and administration. And if any pharma house wants to profit off a pandemic, I’d like to see them try. Regardless, if I were premier of any West African country, I would be setting aside a lot of money to improve health care systems, buy ZMapp and push for a vaccine. This stuff may be sticking around.

<Image courtesy of Vox Media>

*This by the way, should have been entirely predictable.

Can you bequeath your digital assets?

Last year I wrote an article about ownership being the last Rubicon for digital content. Seems I was prescient. I knew this issue would break new legal ground, but I didn’t know how much. Looks like things are heating up on this front, per a Slate article: who owns your iTunes library after your death? Go read it. It turns out these digital content purveyors have EULAs that basically mean you don’t own any of this after you expire.

Here is the key graf:

The Delaware law raises the complexities of how to deal with the accounts that house our e-book collections, music and video libraries, or even game purchases, and whether they can be transferred to friends and family after death. The bill broadly states that digital assets include not only emails and social media content, but also “data … audio, video, images, sounds … computer source codes, computer programs, software, software licenses.” However, the law says that these digital assets are controllable by the deceased’s trustees only to the extent allowed by the original service’s end user license agreement, or EULA.

If you’ve read your Kindle or iTunes EULA, you’d know just how little control over your e-books or music you have. Every time you hit “buy” at the Kindle store, you are not purchasing an e-book; you are licensing it for your personal use only. Even if you reread your e-copy of The Hobbit twice a year for 10 years, you are no closer to owning it, and without Amazon’s permission, no closer to being able to hand it down to your children. Professor Gerry Beyer at Texas Tech University says that the Delaware statute does not override this feature of Amazon’s, or most, EULAs, which are protected by other forms of federal law. “The bill is not designed to change an asset you could not transfer into one you can,” he told me.

This stuff needs to change. It’s preventing me from going whole hog on digital purchases.

The death of Facebook Home

When Facebook Home launched many moons ago in the yesteryears of 2013, I quickly panned it. The value just did not stand up to strategic scrutiny. Others cheered, of course – the demmed gullible tech press. I even did a follow-up post thinking about what FB Home would need to become to actually be a player. But now we hear that my prognostication was right on the money and FB Home will innovate no longer. This was highly predictable: FB Home needed innovation along very different dimensions than FB is set up for at this time (web, not OS). Understanding why some things will work and others won’t (generally) is a key part of technology strategy. And this is what is missing from a lot of tech press analysis.

R.I.P. Facebook Home. It was sorta nice knowing you.

The coming Nigerian 2015 election clusterf#$k

Whatever you think you know about Nigeria’s politics and the latest goings-on, you have to admit that the next election is shaping up to be a humdinger. To understand it all you need to go a ways back, almost to 1914, when Fred Lugard – a British soldier, mercenary and ultimately a huge wanker – decided to join the north and south of the country under one political administration (fwiw, this link is pure opinion, not even sure I agree, but it has the basic history right). These two culturally unalike parts of the African continent were from then on condemned to soldier on as one country, each part hating the other’s guts, each with a knife in its back pocket and a fake smile on its face.

During the struggle for independence from the British, everyone from every region in Nigeria pitched in, and so afterwards people expected Kumbayas. But the cultural differences were deep, and the normal back and forth of politics eventually became mortal combat. In retrospect, the biggest mistake the founders of the modern Nigerian republic made was NOT renegotiating the terms of engagement and polity – and maybe even considering an amicable separation while there was little at stake (although imperial governments would probably have tried to put a stop to that). They did, however, set up a loose federalism which seemed to work initially. But cracks emerged quickly:

  • The country’s economic centers (and areas of highest opportunity) did not sit within any one cultural region; they became forced melting pots that concentrated the politics of resentment.
  • Federal resource and capital allocation (appropriations) became a political football, won by whoever was in power because the appropriations process was mostly executive-driven.

All in all, these cultural stresses, exacerbated by the need to share one central power lever, have led to almost all the problems Nigeria faces as a political entity: coups, military rule, corruption, stagnation; you name it. Imagine the French and the Germans having to govern one country in Europe… yup, just blew your mind.

In 1999, as stagnation was met with a public hunger for some kind of stability, Nigeria transitioned once again into a democracy. The compact for peace at that transition was essentially the following: the north and the south would enter a power-sharing agreement, rotating power every 2 terms (incumbents always won reelection and were only tossed out by term limits).

So, here comes the curve: after the south gave up power following its first 2-term stint, the north picked it up as expected. But then the northern president died in office, and his southern vice president finished his term, ran for reelection and won. The incumbent has argued that finishing the partial term does not count and is going to run for a second full term. If you’re counting, that would be 13 years of southern rule and 3 years of northern rule across 4 presidential terms, with a 5th term for the south in the offing. So basically the north was shortchanged.

There is a ton of speculation that the current fiasco with Boko Haram is simply a politically motivated calculation to destabilize the current government. Regardless of whether you believe that, the fact is that the current government has lost immense credibility in the north by not finding a way to deal with the attacks effectively. Even in the south there is a lot of disenchantment (the south is not culturally uniform; there are two main power blocs that compete). So to recap: the current president has been delegitimized in the north of the country, by design or circumstance. The southwest is opposed by and large, but may not have the candidates to field effectively against the incumbent. Since incumbents tend to win (mostly by corrupting elections), whether the current president wins re-election or not, there will be some serious hell to pay on the streets of Nigeria.

The irony is that Nigeria’s macroeconomic trend is very positive. The only real argument to be had is about its pace, and how quickly other problems like human rights, press freedom, literacy and so on are being tackled. But alas, those pesky pocketbook issues are standing in the way. The people will not eat GDP growth, especially starting from such a tiny economic base.

Simple answers to complex questions: Why did Apple buy Beats by Dr. Dre?

I’m starting a new series called Simple Answers to Complex Questions. It’s imagined along the same general principles that undoubtedly led to the principle of parsimony called Occam’s Razor. I will delve into seemingly perplexing questions that likely have tons of experts or even academics commenting and opining on all sides, and will offer a well-reasoned answer with relative philosophical economy or simplicity. This answer may be well researched and presented with data, or may be a shot from the gut. But overall it will have the distinctive ring of truth.

So let’s get to it: Why did Apple buy Beats by Dr. Dre, even though on the surface it does not seem a reasonable fit?

Simple Answer: Apple bought Beats so that it can continue to sell its expensive products to poorer people, continuing its incredible growth.

Explanation: Apple is sitting pretty right now, but there are some worrisome long-term trends: growth is flatlining, the iPhone seems to attach only to more affluent people (who are fewer in number) and the market is seeing an explosion of Android phones in the lower-priced segment. I surmise that Apple would like to keep its premium cachet but not have poorer people ignore its gloriousness. That is, it wants to eat its cake and have it. This is a stunning proposition – the luxury segment is usually content to be a niche, high-margin segment, but Apple wants to buck the trend.

<iPhones are bought more by the affluent. Intuitively that makes sense. Courtesy of mapbox.com>

So what is required? Some way to sell a luxury product to poorer people – to keep the margin AND the growth. By all accounts Beats products are fairly pedestrian in terms of quality. However, Beats excels at marketing via celebrities and music icons (leveraging the relationships of its founders) to a wide swath of the non-discriminating populace. I don’t have proof of this, but anecdotal evidence suggests that Beats sells a lot of its products to fashion-conscious, less affluent consumers. For these people, it’s a coveted personal accessory. If you think about it hard, you will realize that only 2 companies have this ability to sell overpriced product to poorer people based on the brand equity they have developed – Nike and Beats by Dr. Dre. And only one of them is on the market.

Bottom line: Beats knows how to sell luxury products to less affluent people. Even better than Apple can (and Apple is pretty good too).

Megyn Kelly raised some questions. Just not the ones you think.

<This post was delayed for 6 months. I hate piling on a bandwagon of criticism. TL;DR: exporting dumbness down the barrel of Fox news is not a good look on the USA>

This is a rush post because somehow it feels wrong to even talk about this in the coming new year. You have of course heard the refreshing erudition of Megyn Kelly of Fox News. She of the “Santa and Jesus are white, get over it!” declaration. And the disingenuous “I was only joking!” when the internet basically slapped her around a bit. The ignorance on display is undeniably astounding and can be demonstrated historically but also thematically, given what these two figures represent to the entire planet. But even worse than the ignorance is the absence of empathy and awareness – the sense that words uttered on a news show broadcast by satellite in this country and others should handle such matters sensitively, regardless of what the truth is. And this is even more damning given that the context of the gaffe was a refutation of an attempt at inclusion, an article written by Aisha Harris.

The thing is that, as sad as this episode is, it raises questions about America that the world is asking more and more – questions that a lot of folks here don’t hear and don’t seem to understand:

  1. First, given the world’s best education at the college level, what makes it possible for someone who is college educated to be so basically ignorant? Megyn has an undergrad degree in political science and a JD.
  2. Secondly, if someone is that oblivious (and God knows that you scrape around for some reasons), why would you make them one of the major faces of your company i.e. Fox News? Were there not a metric ton of blonde news personalities (let’s not even go into the population of women with other colored hair) that were smarter who wanted the job?
  3. Thirdly, why oh why would an employee of a reputable media company not give some thought to the global implications of her statements? Fox News broadcasts worldwide. How is it globally aware to make such statements about an essentially global culture when only a fraction of the world is “white”? True or not, does the anchor not realize that Chinese, African, South American and South Asian kids either are Christians or look forward to experiencing the magic of Santa? And in so realizing, the need to proceed with far more caution and circumspection, instead of asserting a declarative statement on skin color that at best could be open to debate?

These questions and their ilk riddle the minds of people who do not live in North America; both the plausible answers and the lack of them cause serious misgivings for those who ponder these things. To be succinct, the mind boggles. And not in a good way. For a world whose standard for journalistic excellence is shaped by the bland integrity of Larry King and the gritty truthiness of Christiane Amanpour, this is pretty puzzling behavior that makes them look askance at Fox News, and in some ways, at the entire country. One does not expect this kind of nonsense from the land of the free.

Android is a Pain in the Ass

<small edits for clarity>

I’ve been doing a lot of mobile stuff recently and have every phone there is: #WindowsPhone, #Blackberry 10, #iPhone, #Android. These are generally all test phones without a SIM, operating over wifi. For productivity and calls, I rock #WindowsPhone and a bit of #iPhone. So rewind 2 weeks: my main WP squeeze commits hara-kiri (screen shatters) and I’m in a bind. I can’t afford a Lumia Icon, and my low-end Lumia 625 cannot keep up with the Windows Phone 8.1 OS update. I’m losing minutes waiting out screen lag and that will not do.

So I decide to get a new #Android phone. The one I’d been using was a loaner anyway and I needed something modernish. I had previously encountered a phone manufacturer doing plain Android on great hardware (I bought one for a friend) and decided to get one at a nice price point. Behold the Blu Life Pure. It’s a good looking phone and I still get plenty of compliments about it.

It’s a giant pain in the ass.

Nothing to do with the hardware (go ahead, check the specs), although the lack of 5 GHz wifi is puzzling. The fault is in effing #Android 4.2.2. It’s clunky. How can I count the ways?

  1. When I compose an email, the ‘send’ button is way on top instead of near my holding fingers at the bottom of the screen where every decent OS puts it.
  2. Saving a phone number from a text message to your contacts is a mysterious task with umpteen clicks. I still don’t fully understand it. I tried once to save an email to a contact from an email message in my inbox… let’s just say I won’t try again. I’m not even joking. I don’t want to learn how anymore. Path of least resistance and pain.
  3. It keeps asking me which app I want to use to open stuff. A lot. Different file types. I’m new with the apps, so sometimes I click ‘just once’, and lo and behold, it keeps asking me forever (I know duh, but really annoying). Why won’t it pick a default and allow me to change it if I care?
  4. It lets everything run in the background and saps my battery life. An app manager is required.
  5. Multitasking… yeesh. Tap twice to see the cards (vs. once on other platforms – or maybe I’m doing something wrong). Flick them upwards and somehow the app is still running in the background… The bundled app manager cannot seem to close things permanently… I could write a dissertation on this one.
  6. I see new text notifications. I open the message app. It has a preview of all messages and I can click into each individual one if I like. I close the app, preview was good enough. It still shows that I have new messages. It wants me to go into every message and open it before it will wipe away the notification…
  7. Somehow defaults are wack. I don’t need to be asked each time which account I want to save my new contacts to. It’s the same one silly, my default account. The one I use all the time. The one I answered the last time.
  8. Keyboard…. I use keyboards across 4 OSes. I’ve never made so many errors in my life. For some reason I cannot seem to get the space bar to register as fast as I type and I keep mashing words together and of course mashed words do not autocorrect….

I could go on, but you get the gist. They’re small things that add up. At first I wondered if this was mere kvetching on my part, the growing pains of change. But after listing the individual crimes, I’ve come to the conclusion that this is a case of missing refinement. Android 4.2.2 feels unrefined, like Windows Vista: full of sharp elbows and lacking a grand unifying vision and design language. I don’t even know why people keep complaining about skeuomorphic design in iOS, when in contrast there seems to be a lack of any kind of consistency to mock in Android. I’m a geek and I can fix all these things. I know the great thing about Android is the open ability to tweak. But I’m slightly resentful that I have to. I’m busy goram!, it’s a demmed utility, not a fetish.

Now the fanboys are thinking: you need KitKat, aka Android 4.4. Maybe, but I don’t buy it. This is less an issue of good code and more of good design. I can’t wait to check out KitKat, but I’m not holding my breath.

I’ve come to the conclusion that Android dominance is simply due to 3 things: great hardware, proliferation of good OEMs and price. That’s a useful lesson to learn. It’s Windows NT all over again.

Evolving Internet standards beyond ‘rough consensus and running code’

One of the more useful things about science fiction (and I am enamored of science fiction) is not just the science or the fiction but some of the more thoughtful plot points used to drive the larger narrative. One remarkable example is the “three laws of robotics” construct invented by the science fiction maestro, Isaac Asimov. Today saying ‘robots’ is so passé – it’s 2014, after all. But when Asimov came up with the concept in 1942 or earlier, it was nothing short of a monumental leap in imagination: there had been no silicon revolution, no computer revolution (ENIAC, UNIVAC and their ilk), and we were barely moving beyond the steam economy to the combustion-engine economy.

So it’s amazing that Asimov did not immediately just explore the still wonderful ideas of mechanical machines that did men’s bidding. He also discussed a set of ethical and philosophical ‘laws’ that needed to be met before any of those machines came into existence or were manufactured. This was a realization that what was at stake was not just a matter of technical construction finesse, but a matter of purpose and principle.

In contrast, when the pioneers of the current internet started thinking about and constructing it, they turned to Requests for Comments (RFCs) as a way to build common functional consensus. An RFC, according to Wikipedia, is “a memorandum describing methods, behaviours, research, or innovations applicable to the working of the Internet and Internet-connected systems. ….The IETF adopts some of the proposals published as RFCs as Internet standards.” Thus RFCs are much closer to functional specs than anything else; in fact they are often unadulterated input into developer work items – when I was at Wind River Systems, the developer I worked with wrote PIM-SM entirely from the RFC (the correctness of the implementation remains to be seen). This stems directly from the ethos of the early work of the Internet Engineering Task Force (IETF), one of the main standards-making bodies of the Internet. Its earliest organizing mantra was the following: “We reject: kings, presidents and voting. We believe in: rough consensus and running code.”1 This basically libertarian credo has been the main ideological underpinning of internet systems for a very long time. But it’s not just libertarian; it’s also a very low-level and functional point of view, one that summarily dispenses with the goals and principles of the complex systems created by these standards. In short, the creators and sustainers of internet systems have given very short shrift to weighing in on the purposes their creation can be put to.

I don’t want to sound judgmental. By basically taking a tabula rasa view of the internet, its progenitors have allowed marvelous things to evolve from the primordial slime of TCP/IP. To use another analogy: toddlers are told what to do; it’s only as they age into adulthood that goals are introduced, allowing scope for their complex ingenuity to drive multiple alternative actions.

The internet has reached adulthood. We have very complex emergent behavior connected to potentially billions of devices that in turn touch the lives of billions of people. Creating standards that mandate certain principles and goals, and that integrate ethics, is crucial for humanity to maintain some kind of control over the direction of global innovation in internet-connected technology. We are already living in one of many alternate realities of innovation. In this freewheeling reality, we are constantly in fear of the intelligence communities’ (IC) snooping and subversion, the threat of 0-day attacks, corporate co-option of internet commerce and employee surveillance, etc. However, these are all basically architectural choices. Privacy protections can be embedded in the way protocols are approved as standards, subversion of agreed ethics can have sanctions attached, embedded nodes can be forced to integrate automatic update processes to improve our defenses against vulnerable internet-of-things devices, etc.

My biggest pet peeve relates to the emergence of connected, unattended devices: your router, your Fitbit, your smart weight scale, etc. Once these things are released into the market, no one seems to take responsibility for updating the firmware and software to patch security issues. We should have rules that say: a) selling such a device makes you liable for any harm that comes from its connectivity functions; b) updating the software to patch the latest 0-days is also your responsibility; c) the update system on each of these devices should be tamper-proof and isolated. And many more along these lines.

We live in a more complex world, and new kinds of standards are required to make the internet work for all of us. We are way beyond “rough consensus and running code”. We need standards as ambitious and expansive as Asimov’s 3 laws of robotics, which don’t just describe what it takes to build a system but also articulate the limits of what it can legitimately be employed to do. Or else.

1
http://www.niso.org/about/documents/strategic_plan/strategic_dir_preview.pdf

Quick Godzilla 2014 review

Aaron Johnson #FAIL. Not a single great moment in the film for him. He just lay there, said his lines and tried to emote. And his voice is as annoying as it was in Kick-Ass. Back to actor school, mon, and maybe get a college degree? Olsen was better, but not by much. She wasn’t given much to work with, but what she did get she did not make memorable; in fact it was actively #forgettable. Cranston was better, but he was a vestigial crank whistleblower. Really not enough for his talents. And they wasted an opportunity with one of the better French actresses of the passing generation, Juliette Binoche! She had like 10 lines max?!!

So it came down to the monsters, and they had precious little screen time, fighting in murky darkness. And Godzilla seemed to lack strategy, dominance. He might well have lost; after all, he lacked air power. That doesn’t seem right; we’re here for the powerful and intelligent dino lizard. A cross between Muhammad Ali and T-Rex.

A bit disappointing overall, unless you have pre-lowered expectations from the last American Godzilla movie. But I think each entry in the canon should stand on its own.

Will not age very well.

Rethinking Open Source security

By now you’ve been sufficiently terrorized by the Heartbleed bug in OpenSSL: a rotten bounds-checking error in the C code of the security library that secured about 40% of the Internet. If you have not checked your servers, you should do so now. I’m rocking mostly IIS in my private cloud, so I’m mostly worry free.
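For flavor, here is a minimal sketch of that class of flaw – hypothetical function and variable names of my own, not the actual OpenSSL source: the heartbeat handler trusted the attacker-supplied length field instead of the number of bytes actually received, so a single memcpy could read far past the real payload and echo adjacent memory back to the attacker.

```c
#include <string.h>

/* Hypothetical sketch of the Heartbleed-style flaw (names are mine,
 * not OpenSSL's). The request carries a payload plus a claimed length;
 * the buggy handler trusts the claimed length blindly. */
size_t heartbeat_reply_buggy(const char *payload, size_t actual_len,
                             size_t claimed_len, char *out) {
    (void)actual_len;                   /* received size never consulted */
    memcpy(out, payload, claimed_len);  /* can read past the payload,
                                           leaking adjacent memory */
    return claimed_len;
}

/* The fix, in spirit: drop any heartbeat whose claimed length
 * exceeds the bytes actually received. */
size_t heartbeat_reply_fixed(const char *payload, size_t actual_len,
                             size_t claimed_len, char *out) {
    if (claimed_len > actual_len)
        return 0;                       /* silently discard malformed request */
    memcpy(out, payload, claimed_len);
    return claimed_len;
}
```

One missing comparison, two years in the wild.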

The amazing thing about this bug is that even though it basically implies a chance that almost everyone’s bank accounts, email accounts and so on are compromised (the bug is 2 years old; exploits have been in the wild since at least November 2013), the outcry has been pretty sedate. Not the media coverage – nay, that has been adequate – the outcry. You’ll understand this if you go back to, say, the outbreak of the Nimda worm (an exploit in IIS). The hue and cry was just cacophonic.

It’s almost as if, because it was an open source issue, the finger pointers are more restrained. Bruce Schneier, whom I admire and respect, was this astonishing mix of measured calm and alarm. I can imagine a much different posture if the exploit had been found in closed source software.

Part of this is tribal. By now it is received wisdom that open source is NOT bad for security. The latent ability to openly audit code, the reasoning goes, is good for rapidly fixing things as they emerge. And the ‘many eyes’ theory takes care of the velocity of discovery and subsequent fixing in the first place. There are reasoned arguments why this is not always true, but the feeling persists in the software engineering community. So imagine my surprise when I asked an ordinary citizen what they thought and they said, more or less, “maybe open source is a bad way to do security.”

You have to understand, that this is near heretical at this stage in hacker culture. Open source is too big to be smeared so cavalierly. But I thought about it for a second: what if we’re exiting an age when depending on the many arguments for better open source security is no longer sufficient? Consider:

  • Reputation – anyone can join an open source project: a ditch digger in rural Idaho who taught himself programming, or an agent of the NSA posing as a harmless student. There are no real checks on who contributes. This means that open source code bases are susceptible to social engineering when a contributor has malicious intent.
  • Device proliferation – open source is the go-to foundation of the Internet of Things – a trendy new term for what we used to call embedded software (it hardly matters that the things we’re embedding into are getting smaller). The problem with the IoT is that once a device is in the field, there are very diffuse responsibilities and incentives to update the software running on it. So there is an expanding pool of vulnerable and exploitable software even when security patches exist. Think your fridge, your router, your wristband, etc.
  • The Cloud – we live in very different times from 5 years ago. Back then, most consumer data lived on laptops, desktops and such. Yes, the security was shitty, but the pipes to get at the data could be tiny and the computer could be turned off – basically, it sometimes wasn’t worth it. Well, imagine if you got everyone to take all their gold bricks from under their mattresses and put them in a bank with just a rent-a-cop to watch it. Yes, you could burgle a few homes expertly before, but now you just show up at the bank, knock off the rent-a-cop and make off with an entire nation’s wealth in one smooth, fell move. That is the potential security situation we are in, with a mass of consumer and corporate data moving into the cloud. Successful digital heists are that much more spectacular. See Target.
  • Human apathy and capability – the one thing I always thought was inane about the open source security trope was the “many eyes” theory. The fact is, many FOSS projects can barely attract contributions even when moderately popular. OpenSSL is a very valuable piece of FOSS, and yet the Heartbleed bug persisted for 2 years. How many bugs of this kind are out there right now, even after we fix Heartbleed? Suddenly “many eyes” doesn’t sound as comforting.
  • Human avarice – as FOSS underlies more and more of the economy, both state actors and gangsters have an incentive to mount an arms race on finding 0-day flaws. There are already million-dollar companies whose mission is to do exactly this. Most flaws in a prior age were found by benign security researchers doing a public good with a certain kind of skill and toolset. Now all bets are off. I can bet there are people poring over key parts of internet software infrastructure in the public domain to find exploits. Why not? They’re being paid handsomely for it. You can literally take all the FOSS in the world, rank it by criticality to internet safety, and employ a team to go to town reading every line of code for exploits.

This is not to say that closed source offers better security, although in this particular case I feel good about my decision to use IIS (it had nothing to do with security at the time). It’s to say that open source security orthodoxy is bad. And every once in a while, the geek community has to look around at the world we live in – not the world we made our assumptions in – and adjust to that reality.