TSA rules on taking off your shoes at security checkpoints

October 14th, 2004

When we were flying back from Maui to LA, the screener asked me to take off my tennis shoes. I asked why, and he said “because they fit the profile”; when I asked what the profile was, he responded “shoes like those.”

I went ahead and took them off and put them through the machine even though just five days earlier no one at LAX had asked me to take them off. On the other side of the screening device, I asked a different screener why I had needed to take off my shoes. She said they fit the profile of a heel taller than 1” (which they didn’t). I mentioned I had not been asked to take them off at LAX and her response was that LAX should have made me take them off. I told her that I thought the most recent TSA directive on shoes was that passengers got to decide whether to take them off or not, and she told me I was wrong. She mentioned that I could refuse to take off my shoes if I wanted, but then I would be automatically sent to the special frisking area where they would make me take them off anyway.

Hmmm. Problem is, the TSA’s official site says that I was right: passengers are allowed to decide whether to take off their shoes. The 1” rule is apparently only a guideline concerning when TSA personnel should recommend that people take off their shoes. Granted, this policy is very hard to parse.

Rhetorical question: If the TSA cannot get its policies on shoe removal right, and train its people properly, how well do you think really important policies on detecting terrorists at checkpoints are being defined and implemented?

Performant

October 9th, 2004

Is “performant” a word? I came across it in an article about Microsoft Visual Studio .NET 2005:

C++ is the easiest language to use for native interop and is often the most performant.

Why Microsoft creates buggy software inefficiently

August 2nd, 2004

We’ve all heard that Microsoft software is buggy. Bill G. says this is just because so many people use it that they find all the bugs, and so the bugs get more attention; plus there are those who just enjoy pissing on MS. But I just had the experience, for the first time in a decade, of programming for Windows. The bottom line is that the entire environment is so buggy, flaky, and poorly documented that it’s a miracle anyone can write a Windows program that runs at all.

Bill says that the open-source model can’t work because there’s no economic incentive to produce solid software or support it. What I realized is that he has it exactly backwards. The MS software development model can’t produce good software because it’s corrupted by the need to get the next version of the product out the door. Good software periodically, and suddenly, needs to be rearchitected, or “refactored”, as they say, at times that are inopportune from the CFO’s standpoint. With the MS model, there is never a reason to take the time to go back and build the software right.

The elegance of the MS architecture has gone steadily down since Windows 3.1, which, although kind of funky, at least had a weird predictability and consistency about it. Now, there are layers upon layers of additional libraries and wrappers on top of it, each documented more poorly than the last. The only way to use the MS docs is to search them using Google, and even then it is all too often the case that you just can’t find what you are looking for, or worse—it’s wrong.

Programming in the MS world uses the approach I call “throwing mud at the wall”. Basically, you throw mud at the wall and see what sticks. You can never figure out the best way or the right way to do something in advance so you just try all the ways and use the first one that works. It’s like playing “pin the tail on the donkey”.

A good architecture has the characteristic that it surprises you from time to time with the cool things you can do easily because of its superior design. I’ve yet to come across a single thing in the entire Windows architecture that gave me this feeling.

Just a few quick examples, all from the browser extension world, which is what I was working on.

  1. The MS docs refer to an API to manipulate your browser’s history list. They give the name of a header file you use to access the API. But this header file exists nowhere in the world, except in a non-MS version that you can find on the net that was reverse-engineered by some poor sap who had no choice.
  2. A key ATL library used for internet access is just missing the wide-character version of the interface needed to read a web page off the net—making the entire app fail to link. I finally found this info in the MS knowledge base—with no work-around given, of course.
  3. Using the technique suggested to take the user to a particular web page after running the installer—namely a one-line VB application—polluted the entire installer with “.NET-ness”, which persisted even when I removed the VB app, requiring me to rebuild the installer component from scratch.
  4. The HTML DOM API provides a W3C-compatible “text range” object to represent ranges of text within a web page. But it is so buggy that something as basic as moving an endpoint of a text range forward or backward by one character doesn’t even work (see the first sketch after this list). And operations on a text range corrupt the DOM.
  5. The DOM built by IE is not even well-formed, to the extent that it sometimes cannot be walked from start to end (see the second sketch below).
  6. The API uses multiple interfaces for the same thing—four different interfaces for windows, for example—and of course the documentation is structured so that you can never find anything unless you already know what you are looking for before you start.
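
For concreteness, here is roughly what the text-range manipulation in item 4 looks like. This is a minimal sketch, not code from the extension itself: it assumes you already have an IHTMLDocument2 pointer in hand (say, from a Browser Helper Object), the function name is my own, and error handling is abbreviated.

    #include <mshtml.h>
    #include <atlbase.h>   // CComPtr, CComQIPtr, CComBSTR

    // Create a text range over the document body, collapse it to its
    // start, then try to move the end forward by one character -- the
    // operation that proved unreliable in practice.
    HRESULT MoveEndByOneChar(IHTMLDocument2 *doc)
    {
        CComPtr<IHTMLElement> body;
        HRESULT hr = doc->get_body(&body);
        if (FAILED(hr) || !body) return hr;

        CComQIPtr<IHTMLBodyElement> bodyElem(body);
        if (!bodyElem) return E_NOINTERFACE;

        CComPtr<IHTMLTxtRange> range;
        hr = bodyElem->createTextRange(&range);
        if (FAILED(hr) || !range) return hr;

        range->collapse(VARIANT_TRUE);   // collapse to the start of the body
        long moved = 0;                  // reports how far the end actually moved
        return range->moveEnd(CComBSTR(L"character"), 1, &moved);
    }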
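
The walk in item 5 is equally unexotic: the standard recursive traversal over MSHTML’s node interface, sketched below (again, CountNodes is a hypothetical name of mine, not code from the extension). You start it by querying the document body for IHTMLDOMNode.

    #include <mshtml.h>
    #include <atlbase.h>   // CComPtr

    // Count every node reachable from 'node' via firstChild/nextSibling.
    // On a well-formed DOM this visits the entire tree; on some pages the
    // tree IE built could not be traversed from start to end.
    long CountNodes(IHTMLDOMNode *node)
    {
        if (!node) return 0;
        long count = 1;   // count this node

        CComPtr<IHTMLDOMNode> child;
        node->get_firstChild(&child);
        while (child) {
            count += CountNodes(child);
            CComPtr<IHTMLDOMNode> next;
            child->get_nextSibling(&next);
            child = next;   // CComPtr assignment releases the old pointer
        }
        return count;
    }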

And so on, multiplied by ten or a hundred.

What’s surprising, then, is not that Microsoft’s software has as many bugs as it does but that it doesn’t have many, many more; not that their software is often late, sometimes by years, but that it gets released at all. And I suspect that the high levels of profitability deriving from Microsoft’s near-monopoly in many markets are hiding the fact that it is well behind even other commercial software companies in development productivity, because of the abysmal state of its architecture.

Another reason to live in West Hollywood

July 18th, 2004

A West Hollywood city ordinance overrides any no-pet clause in your apartment lease if you have HIV/AIDS.

Why do humans believe in religion?

July 10th, 2004

Humans have a built-in tendency to see “agents” behind phenomena. This is hard-wired evolutionarily into our brains, as a survival mechanism: our caveman ancestors were able to deal with a predatory beast more effectively by imputing agency to it, assuming it had a “plan”, namely to try and eat them.

It is this same adaptation that then causes man to imagine supernatural agents behind the weather, victories over neighboring tribes, winning the lottery, or human life and death.

Furthermore, the human brain has developed evolutionarily to best remember differences and exceptions and oddities. So oddities such as agent-deities who are humans but can fly, or animals who can talk, are easily retained within our individual and cultural memories.

Such is the theory developed by Scott Atran in In Gods We Trust: The Evolutionary Landscape of Religion. I don’t recommend the book for a casual read; it’s pretty heavy going. But Atran has an incredibly detailed knowledge of both human religion and evolution, and the book is filled with great insights, if somewhat turgid in places.

I don’t believe Atran discusses my own theory about belief in the afterlife, which, although commonly attributed to the human need to be comforted, is actually rooted in the rudimentary consciousness, found even in animals, that lets them imagine the existence of something that has gone out of their sight.

Personally, I would have preferred it if Atran had spent a bit more time looking at peak experiences and human growth patterns and told us about their evolutionary bases.

In any case, this book provides a highly convincing explanation of why humans believe in religion. In his next book perhaps Atran can propose ways to wean our race off this illogical, counterproductive addiction.

Ajipon, famous ponzu brand

July 4th, 2004

Imagine living in Japan for 15 years and never having heard of “Ajipon”, the ubiquitous ponzu sauce—although I’m sure we had some in our kitchen, and I must have walked by it on the grocery store shelves hundreds of times.

According to the Ajipon web site put up by its manufacturer, Mitsukan, Ajipon was developed in 1964, back when ponzu was not a common household item. The Mitsukan president was having some mizutaki in a restaurant and vowed to bring the fabulous taste of the dipping sauce into the Japanese home. Ajipon was the result of three years of experimentation with different types of citrus and degrees of saltiness.

Ponzu itself is created by boiling mirin with katsuo-bushi and konbu and vinegar, then adding citrus juice. If you then add soy sauce, it becomes “ponzu shouyu”, although this could also be called just ponzu. Ponzu or Ajipon would most commonly be used as a dipping sauce for nabe dishes; mixed with grated daikon for yakizakana; or as a dressing for tataki.

And in modern cuisine? In the recipe “Oyster-leek Gratine with ponzu” Ming Tsai deglazes the pan where he sautéed the leeks with ponzu. A San Diego restaurant serves up ahi with a ponzu glaze. Another restaurant dresses pan-fried escolar with ponzu. Shiro in Pasadena serves catfish with ponzu and cilantro. A cruise ship’s menu tries a ginger ponzu sauce on its grilled ahi. Sushi Masu in Westwood serves up monkfish liver with mountain caviar in ponzu sauce. Add olive oil and you have a ponzu vinaigrette. Geisha uses ponzu as a marinade (with coconut!) for its fluke dish.

Ponzu is the perfect marriage of the flavors of the paddy and the sea and the orchard, of the salty and the sweet and the tart.

Stamping out the loan-word disease in Japanese

June 29th, 2004

The “Foreign Loan-word Committee” has issued recommendations for replacing 33 common katakana-isms with “native” Japanese.

Thank God they backed off on some of their worst proposals, like replacing “online” with “kaisen-setsuzoku”.

Of their new proposals, I especially like “setsumei sekinin” for “accountability”. In other words, the Japanese view accountability as the question of who has to explain something.

A lot of the proposed replacements are to just use the obvious Japanese, such as “dougu” for “tool”. Ditto for replacing “stance” with “tachiba”, or “conference” with “kaigi”.

But that raises the question: why did people start using “tool” in the first place, when they already had “dougu”? That’s a critical question of linguistic philosophy which the grayhairs on the committee didn’t even try to answer. I know the answer. The centuries-old Chinese compounds have been rounded and smoothed, like rocks in a river-bed, by the forces of linguistic nature over time. The English words are young, agile, opinionated, angular, with a personality (make that PA-SONARITI). In that sense, they have a different semantic profile. Simply put, they mean something different. That’s why people started to use them and will continue to use them.

But what’s really weird is that the words they’re proposing as replacements for the 30-year-old borrowings are themselves borrowings into Japanese, just much older ones!

Distros and convos

June 10th, 2004

The Japanese shorten long English loanwords by the simplest of expedients—simply chopping off the last half of the word. So “convenience” becomes “conveni”. That always seemed kind of crude to me, albeit cute in a way.

But now I’ve noticed a trend in English to do the same thing (instead of, or in addition to, the old acronym approach). Two recent examples are the teen-age “convo”, for “conversation”, and the geekian “distro”, for “distribution” as in a Linux distribution. And of course there’s the old stand-by “combo”, corresponding to the Japanese “kombi”.

Why is it that in English we tend to want to end these words with an “o” sound?

Any more examples out there?

How George Tenet should have resigned

June 5th, 2004

Americans make fun of Japanese for the way their politicians and corporate executives resign to “take responsibility” at the slightest provocation, with all the grim faces and bowing at the inevitable press conference.

Then we have George Tenet, US Director of Central Intelligence, who resigned on Thursday; he and George Bush spent the entire day emphasizing that it wasn’t about taking responsibility.

Tenet is resigning for personal reasons; he wants to spend more time with his family, especially his high-school son (but did anyone ask the son if he wants to spend more time with his father?).

I don’t get this. Who is the President trying to protect with this charade? Didn’t George see the huge potential positive impact of just coming out and saying, “The CIA didn’t do its job. People need to be accountable. George Tenet was a fine public servant, and made great contributions to the CIA, but he led an agency which failed the nation at a critical time. We sat down and agreed it was time for a new start.”

That would have made the President appear decisive (not to mention honest), and Tenet responsible (and honorable).

These guys are so stupid they can’t even recognize when taking responsibility would come out positive in the Rovian political calculus. They remain resolute in their refusal to ever take or assign blame for anything.

Neurobiology (II) — why people hoard

June 2nd, 2004

Now the New York Times reports that obsessive hoarders have decreased activity in the anterior cingulate, the “brain structure involved in decision making and problem solving, […as well as the] posterior cingulate, an area involved in spatial orientation, memory and emotion.” The theory is apparently that these poor guys worry about losing track of their stuff so they keep it piled up in the living room.