
Wednesday, August 24, 2011

Just last week the West Memphis Three were released, finally bringing an end to a well-known miscarriage of justice, nearly two decades too late.

In 1994, 18-year-old Damien Echols, 17-year-old Jessie Misskelley and 16-year-old Jason Baldwin were convicted of the murder of three eight-year-old boys who had been found murdered in the Robin Hood Hills area of West Memphis, Arkansas some months earlier. The case got a lot of publicity because of an HBO documentary about it, Paradise Lost, and a number of celebrities taking up the cause of the dubiously convicted teenagers. The attention only served to highlight the shoddy investigative practices of the police and the practically nonexistent evidence connecting the three with the murders.

The prosecution's case mostly relied on the confession of one of the boys, Jessie Misskelley, who claimed that he had witnessed the murders, though he hadn't participated. But his confession had a number of problems. For one, he claimed the whole killing occurred at the creek bed where the bodies were eventually discovered, even though physical evidence at the scene indicated that the boys were probably killed, or at least assaulted, elsewhere and then dumped there (there was almost a complete lack of blood at the scene). Secondly, Misskelley said the boys were tied with brown rope, when in fact they were tied with their own shoelaces. Thirdly, Misskelley initially claimed that the killings took place on the morning of May 5, 1993, even though the three victims were still alive at that time and had to have been killed some time in the evening. Misskelley changed his confession several times before he finally latched on to a time late enough for it to be possible.

The only other evidence the prosecution had was a knife, found deposited in a lake near Baldwin's residence, and some fibers found on the suspects' clothes that were at least similar to fibers from the victims' clothes. However, the knife couldn't be connected with the killings, nor could it be shown to have belonged to any of the boys. And the fibers were similar to a great many common products, and thus could easily have come from any number of sources.

The police struggled to find anyone who would claim that the three boys actually were well acquainted, finally turning up one witness, even though Misskelley really only knew the other two through school and wasn't friends with them. Additionally, they found a witness who could put at least one of the boys near the scene of the crime around the time of the murders. And, to the benefit of the prosecution, some children even stepped forward claiming that the boys had confessed to the killings afterwards. All of these testimonies were riddled with problems and strained credibility, but they were persuasive to a jury eager to convict.

But most of the case relied on the fact that Echols was a devotee of neo-paganism, being interested in Wicca, and the other boys were at least tangentially connected with such practices. The police had initially believed that the case strongly indicated a satanic ritual killing. The charge that this was part of a Satanic ritual caught on in religiously conservative West Memphis, especially since this was a time when the moral panic over Satanic Ritual Abuse was still a hot concern in some corners of the country. That panic eventually died down as investigators started to realize that nearly all reported cases of it were due to urban legend and hearsay, or to "memories" falsely recovered under hypnosis. In other words, even though the prosecution brought in a witness at trial who claimed to be an expert on Satanic Ritual Abuse, the whole thing was hogwash and the expert was worse than ignorant.

Nonetheless, despite an absence of evidence, the boys were convicted, probably mostly due to their perceived association with satanism and neo-paganism (they weren't satanists, but the prosecution went to great lengths to suggest they were, with evidence such as their tendency to wear black and listen to heavy metal).

The case seems similar to other high-profile killings where pressure from the public has led police to pursue a conviction on highly dubious evidence, such as in the cases of William Heirens, the Boston Strangler and the Monster of Florence.

Finally, in 2007, DNA testing of evidence from the victims failed to connect any of the boys with the crime, and the case was scheduled to be retried in light of this new evidence. With that retrial looming, this new deal was struck: the three pleaded guilty to lesser charges, which got them released immediately but doesn't exonerate them. They will surely continue to fight for complete exoneration, but at least they can now do so from outside of a jail cell. Though at this point it's hard to see how such a miscarriage of justice can be reversed: the boys have sat in jail for far too long, and the true murderer or murderers may no longer be discoverable.

Sunday, August 21, 2011

Strategic Pricing

I just finished reading Traffic by Tom Vanderbilt, and he has an interesting discussion of free parking. On the one hand there's the idea that more parking encourages more driving (since more people will drive to an area where they have a better chance of finding parking), but also, on the flip side, that a shortage of parking is an indicator of underpricing. Having recently taken a few trips to Chicago by car, I've noticed that some parts of the city are flush with parking while in others it's incredibly scarce, and that there's a mix of free parking and pay parking, but all the street parking that costs anything costs the same (so far as I saw). Another author (Donald Shoup) has suggested, as a rule of thumb, that parking should be priced such that spaces are always about 85% full. Areas with chronic shortages of parking should have their prices raised, and places with excess supply should have prices dropped or parking spaces eliminated. This also suggests adjusting pricing by time of day, so that parking is more expensive during high-traffic times (for most places, business hours on weekdays) and cheaper during low-traffic times.
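Shoup's 85% rule can be read as a simple feedback loop: raise the price when occupancy runs chronically above the target, lower it when occupancy runs below. Here's a toy sketch of that idea; the linear demand curve and all the numbers are invented for illustration, not taken from Shoup:

```python
# Toy sketch of Shoup-style parking pricing: nudge the price each
# review period until occupancy hovers around the 85% target.
# The demand curve and all constants are illustrative assumptions.

TARGET = 0.85   # target occupancy (Shoup's rule of thumb)
STEP = 0.25     # price adjustment per review period, in dollars

def occupancy(price):
    """Hypothetical demand: fewer cars park as the price rises."""
    return max(0.0, min(1.0, 1.2 - 0.15 * price))

def adjust(price, periods=40):
    for _ in range(periods):
        occ = occupancy(price)
        if occ > TARGET:
            price += STEP                    # chronic shortage: raise the price
        elif occ < TARGET:
            price = max(0.0, price - STEP)   # excess supply: lower it
    return price

price = adjust(1.0)
print(f"price ≈ ${price:.2f}, occupancy ≈ {occupancy(price):.0%}")
```

With a finer price step, the same loop would settle closer to exactly 85%; the point is only that the rule turns a vague goal ("enough parking available") into a mechanical pricing adjustment.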

Now, while I was reading about this, I was waiting for him to apply the same idea to highway traffic. Sure enough, he did. This being a book about traffic, it would seem inevitable that he would talk about solutions to congestion. Just building more roads is not the answer, since it always runs up against the "if you build it, they will come" phenomenon, whereby more people drive, and drive more, when they have less traffic to contend with (meaning new roads will almost always be filled to capacity and beyond). Think of it this way: providing free roads is like subsidizing driving, providing more roads subsidizes it more, and if you want more of something, the best way to get it is to subsidize it. Additionally, more roads just exacerbate the problem that most roads are underutilized outside of rush hour (especially late at night and early in the morning).

The solution is "congestion pricing." You charge people for using a road during high traffic times, and perhaps charge them even more during really high traffic times. This forces people to make price/value calculations: "Is it worth it to drive now, or can I wait a few hours until it's cheaper?" Some people will decide it's worth it to go during high traffic times, some that it's worth it to go earlier or later, some that it's worth it to use another form of transportation. The ultimate result is that, if the pricing is done well, traffic will be lower and will be more spread out throughout the day.
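The price/value calculation each driver faces can be made concrete. In this made-up sketch, a traveler weighs the toll for each time slot against how much they personally mind shifting away from their preferred peak time, and picks the cheapest combination; all the tolls and inconvenience costs are invented numbers:

```python
# Illustrative congestion-pricing sketch: each traveler picks a
# departure slot by trading the toll off against the perceived
# inconvenience of shifting away from peak. All figures are made up.

TOLLS = {"peak": 6.0, "shoulder": 3.0, "off_peak": 0.0}
SHIFT_COST = {"peak": 0.0, "shoulder": 2.0, "off_peak": 5.0}  # $ of inconvenience

def best_slot(flexibility_penalty):
    """Pick the slot minimizing toll + perceived inconvenience.
    flexibility_penalty: how much this traveler hates shifting
    (0 = completely indifferent about when they travel)."""
    return min(TOLLS, key=lambda s: TOLLS[s] + flexibility_penalty * SHIFT_COST[s])

# A population with varying flexibility spreads itself across the day:
choices = [best_slot(scale / 10) for scale in range(0, 21)]
for slot in TOLLS:
    print(slot, choices.count(slot))
```

The flexible travelers shift to the free off-peak slot, the moderately flexible take the shoulder, and only those who value the peak time more than the toll keep driving at rush hour — which is exactly the spreading-out effect the pricing is meant to produce.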

This type of strategic pricing is used in other industries: airlines charge more for more popular times of day (early morning and late night are usually the cheapest), more popular days of the week (Tue-Thu is usually the cheapest) and more popular times of year (Thanksgiving and Christmas/New Year's are super expensive), and usually charge less for early purchasing. But I think it's surprising that many other industries don't do it more. For example, though movie theaters have modest matinee discounts, they don't charge more for high-traffic Friday and Saturday evenings, and they don't lower prices further to fill theaters during less popular times; not to mention lowering prices for less popular movies and raising them for more popular ones. I'm surprised that concert tickets sell out so quickly, when the venues could have sold them at much higher prices, leaving tickets available even for people who buy at the last minute and virtually eliminating the secondary market (ticket scalping). It's not a solution for every industry, but it should be used more.

Pricing can be a good way to adjust human behavior, and in fact it really helps us make decisions, giving us tangible prices with tangible numbers that help us evaluate whether something's worth it. Free parking genuinely costs money (since that space could be otherwise utilized), just as highways genuinely cost money to build. Additionally, more in-demand times and places are genuinely more valuable to people, and thus command higher prices. Giving something a tangible price, and making people pay it, forces on them decisions that further a more efficient allocation of scarce goods.

Saturday, August 13, 2011

Spoilers

Wired points to a new study looking at how much people enjoy books when the ending twists are spoiled and when they're not, and declares that "Spoilers Don't Spoil Anything." Actually, that's not exactly what the study found. The study found that when comparing people's enjoyment of a story with or without a spoiler, on average, people tended to enjoy the spoiled stories more, in some cases significantly more. In a minority of cases, the subjects preferred the story unspoiled. In short, spoilers usually don't really spoil most types of stories, though they do spoil some.

To me this isn't really all that surprising, and I've discussed before the utility of "revealing surprises much earlier to create anticipation." The prototypical example is Romeo & Juliet, where the surprising twist ending is actually revealed in the Prologue. The anticipation of the tragic ending casts a pall over the whole play, giving it a certain ambiance, and as the tragic end approaches, viewers begin to feel its sadness in advance.

Nonetheless, there are times when a twist ending is best and a writer should try to withhold information. The question is when is a twist ending desirable and when not. Though I can't say I know the answer to this question, I have some thoughts.
1) It's probably worth it to hold out for a twist ending only if it's a really good twist, something really surprising. If the information isn't that earth-shattering, reveal it as soon as it's convenient.
2) If the audience can too easily guess the twist, it's not worth withholding.
3) A good twist should bring insight, not confusion. If things that didn't make sense suddenly make sense after knowing the twist, that's good. If things that previously made sense suddenly don't, that's bad.

I'm sure there are some other good rules of thumb, and there are exceptions to these rules. But they're a good starting point for thinking about when it's worth it to employ a twist ending. Most of the time, though, it's probably best to reveal information when it's convenient over the course of the story.

Wednesday, August 10, 2011

Democracy in Degrees

Many governments and NGOs have focused on bringing democracy to developing countries on the belief that democracy is a road to prosperity. I can't say I disagree with the general idea, but such efforts have proven largely unsuccessful, I think, because such organizations confuse democracy with elections. "Democracy," of course, just means rule by the people, but it comes in degrees: the more the people of a state are in control of the actions and destiny of that state, the more democratic it is. Just giving people an election gives them some degree of democracy, but not necessarily very much. If the persons elected use their power to aggrandize themselves or their friends, push through unpopular initiatives or in general abuse their office, that's not very democratic. Though representative democracy is not the only way to empower a people, we can say that in such a system, where officials are elected to make decisions on behalf of the people, the country is democratic to the degree that those officials are beholden to the interests of the people. Elected officials are human, and they will inevitably be guided, to a greater or lesser degree, by self-interest. When that self-interest leads them to push through unpopular laws and favor well-connected friends or special interests at the expense of the broader public, that can't be described as terribly democratic.

Now, whether more democracy is always better, or whether there is some optimal point beyond which making a country more democratic would actually tend to make things worse, is an open question. It can certainly be said, though, that many countries could benefit from being more democratic. Perhaps even ours. If, for example, a survey asks "Does the Federal Government have the consent of the governed?" and only 17% say yes, you can imagine that not many people think the government is too democratic.

Friday, August 5, 2011

A Spoonful of Sugar

A new study in the Journal of Psychoactive Drugs from researchers at the University of California, Santa Cruz shows that the legalization of medical marijuana in California has led to a lot of people seeking marijuana prescriptions for an increasing variety of ailments (via NORML):
[R]elief of pain, spasms, headache, and anxiety, as well as to improve sleep and relaxation were the most common reasons patients cited for using medical marijuana.
...
Compared to earlier studies of medical marijuana patients, these data suggest that the patient population has evolved from mostly HIV/AIDS and cancer patients to a significantly more diverse array. ... This suggests that the patient population is likely to continue evolving as new patients and physicians discover the therapeutic uses of cannabis.
This reminds one of the rise of medicinal alcohol during Prohibition. During the 19th century many doctors had believed that alcohol had a number of medicinal benefits. But as medicine advanced into the early 20th century, skepticism about the benefits of alcohol was on the rise, such that the AMA issued a statement discouraging the use of alcohol as a "therapeutic agent." But there's nothing like some good old prohibition to make doctors reconsider:
Alcohol was prescribed for a variety of ailments including anemia, high blood pressure, heart disease, typhoid, pneumonia, and tuberculosis. Physicians believed it stimulated digestion, conserved tissue, was helpful for the heart, and increased energy.
Medicinal alcohol grew popular enough that eventually, "Over a million gallons were consumed per year through freely given prescriptions." Even though medicinal alcohol was confined to hard liquor, Congress held hearings in 1921 considering whether it might be possible to permit medicinal beer.

In the case of both medicinal alcohol and marijuana, the distinction between medical and non-medical drugs is becoming blurred, and the authors of the study note that this is on the rise (via Jacob Sullum):
Prozac and other SSRI-type antidepressants, for example, are often prescribed for patients who do not meet DSM criteria for clinical depression but who simply feel better when taking it. Such "cosmetic psychopharmacology"...is likely to grow as new psychiatric medications come to market. The line between medical and nonmedical drug use has also been blurred by performance enhancing drugs such as steroids, so-called "smart drugs" that combine vitamins with psychoactive ingredients, and herbal remedies like mahuang (ephedra) available in health food stores.
This is a circumstance that may be on the rise, but it's hardly new. Beyond medicinal alcohol we can also consider the case of the vibrator, which was first introduced as a medical device in the late 19th century. Since doctors during the Victorian period believed that the way to relieve the bogus ailment of "female hysteria" was through orgasm (they thought this hysteria was caused by the buildup of "female semen," which was apparently released during orgasm), the manual stimulation of a woman's nether regions was an established medical treatment. Doctors apparently welcomed the introduction of the first steam-powered vibrator as something much easier than fingering their patients to orgasm. As more affordable models of vibrator were introduced for the consumer market, its popularity rose sharply. We can imagine that a great many women discovered that they quite liked taking their medicine (some medicines don't need a spoonful of sugar to go down; they are the spoonful of sugar) and vigorously applied themselves to treating their hysteria.

The common thread through all of these is a sanction against something that people want to do (be it alcohol consumption, marijuana consumption or masturbation) and a plausible medical reason to skirt that sanction. In such cases the distinction between medical and recreational use becomes unclear and people take advantage of it to do what they enjoy.

Thursday, August 4, 2011

Black Swan Laws

Wendy McElroy criticizes the newly emerging Caylee laws on the grounds that they're "Black Swan Laws." She defines a "Black Swan Law" as
a law created in response to a highly unrepresentative situation or legal case, which is typically rushed into effect and then used to regulate everyone's daily life.
These Caylee laws, which many states are trying to institute, basically require parents to notify authorities within a certain short amount of time (usually about 24-48 hours) that any child of theirs below a certain age is missing. They also usually require that if parents discover their child is dead, they notify authorities, again within a short amount of time (usually only a few hours).

The laws are entirely a response to the July 5 verdict in the Casey Anthony trial. Casey Anthony became a suspect after her daughter Caylee wasn't reported missing until a full month after the child had last been seen, and even then not by Casey Anthony herself, but by her mother. Casey Anthony had in fact lied repeatedly to keep her parents from knowing that their granddaughter was missing. When Caylee's remains were found, Casey Anthony was quickly arrested, charged with murder and put on trial.

Unfortunately, when the case went to trial, the prosecution simply wasn't able to build a strong enough case. Though most everyone was convinced that Casey Anthony had murdered her daughter, simply because her actions were highly suspicious, the prosecution wasn't able to actually connect her with the murder and prove the case beyond a reasonable doubt. Casey Anthony was merely convicted on some minor counts of providing false information to police and was promptly released for time served and good behavior.

This has led to a nationwide call for such laws, with many petitions, and laws starting to be drafted in many states. The problem with such Black Swan laws is that they're directed at unusual circumstances. What makes cases like Caylee Anthony's so newsworthy, and thus well known, is precisely that they are unique. Well-crafted laws are directed at genuine problems that are persistent and widespread enough to justify the costs and unintended consequences of a law. To all appearances, parents failing to promptly report children missing is not a widespread problem. Thus, the law will impose significant costs and create numerous unintended consequences merely to prevent a situation that is fleetingly rare.

The Caylee laws have had their share of critics, and these critiques note many of the problem cases that could potentially arise. In fact, the basic complaint is that these problem cases (such as a parent failing to notify authorities about a missing child because they assumed the child was staying with a friend) will be vastly more common than the actual cases the law is meant to address (parents killing their children). Not to mention that authorities may be overwhelmed by reports from parents whose children have a habit of regularly disappearing for short stints.

In fact, this particular Black Swan Law seems to be even worse than usual. Usually Black Swan Laws at least try to prevent the particular event from recurring. For example, Megan's Law and the Amber Alert were passed on the theory that if the law had been in place, the tragedy would never have occurred. In this case, though, the logic of the law seems to be to prevent a Casey Anthony from getting away without being punished. Certainly it wouldn't have prevented Caylee's death, since the reporting of a child missing would only take place after the deed is done. In fact, if you look through cases of parents killing their children, it's quite normal for parents who've just killed their children to report them missing, precisely in order to avoid suspicion. The fact that Casey Anthony failed to report Caylee as missing, and tried to mislead her family into thinking her daughter wasn't missing, is what makes this case unique. The Caylee laws instead seem to be focused on preventing Casey Anthony's short prison term: if a child's death is reported more quickly, then investigators will be better able to gather forensic evidence (thus increasing the chances of finding the true killer), and if a parent fails to report it promptly, they can be punished for that. On this theory, if a Caylee law had been in place, Casey Anthony would have served more jail time. We thus seem to have a law aimed at an even more fleetingly rare case: a clearly negligent (if not outright murderous) mother getting away with a rather light prison sentence.

In short, if such Caylee laws are passed widely, we'll probably be hearing about all of the unintended costs and negative effects of these laws, just as we've heard about Megan's Law and other similarly named laws. Perhaps people will realize the problems of such Black Swan laws, and that tragic events need some sort of cooling-off period between the tragedy and the passage of hasty laws meant to address it.

Monday, August 1, 2011

Overly Broad Patents

I talked a few days ago about software patents, and how one of their problems is that they're frequently overly broad, giving patent-holders fairly excessive power to sue similar, independent inventions. It turns out that this may not be a new phenomenon at all.

Reading through Mark A. Lemley's paper "The Myth of the Sole Inventor," this problem seems to have existed since the beginning of the patent system, and fairly describes many famous inventions.

For one, Lemley notes that the invention of the steamboat was by no means the sole work of Robert Fulton, who ultimately received the patent. Instead:
While Robert Fulton is acknowledged by the popular imagination as the inventor of the steamboat, in fact the historical evidence suggests that many different people developed steamboats at about the same time. Indeed, in the aftermath of the Revolutionary War, when the Articles of Confederation left patent rights to the states, different states issued patents to different claimants to the steamboat. The conflict between these inventors over patent rights issued by different states was one of the driving forces of assigning patent rights to the federal government in the U.S. constitution.

Fulton is remembered as the inventor of the steamboat primarily because he was successful in writing a broad patent to cover it, albeit one patented decades after other claimants.
A similar case can be seen with the telephone. Many separate inventors were working on transmitting sound via telegraph wire simultaneously, and most of the pieces necessary to make a telephone work were invented by others before Bell. We only remember Bell because he ultimately got the patent:
Bell‘s ultimate invention put together a transmitter, a fluctuating current, and a receiver. But so did others. Elisha Gray filed an application in the patent office on the same day as Bell, following on other Gray applications that predated Bell‘s, and their inventions were ultimately put into interference. The resulting case went to the United States Supreme Court, and the Court‘s opinion takes up an entire volume of U.S. Reports. Despite the fact that Gray‘s independent invention was different and in some ways better than Bell‘s, and despite the fact that Bell actually got his invention to work only in March 1876, well after his filing date, Bell won the case. The Court ruled for Bell despite the breadth of his patent claim, which covered any device "for transmitting vocal or other sounds ... by causing electrical undulations, similar in form to the vibrations of the air accompanying the said vocal or other sounds."
Similarly, George Selden was granted an unfortunately broad patent in 1895 on the combination of an internal combustion engine with a four-wheel carriage, even though he didn't invent the internal combustion engine (he merely created a lighter version of it), didn't create the four-wheel carriage, and wasn't even the first to combine the engine with a wheeled carriage. He made some significant design improvements, and others subsequently developed better designs, but he was nonetheless effectively granted a patent on the automobile. It took an eight-year legal fight for Henry Ford to finally have Selden's patent invalidated in 1911.

The same goes for the invention of the airplane. Gliders had been developed before the Wright brothers; the airfoil wings they used were invented by Horatio Phillips; adding a tail for stability was developed by Alphonse Penaud.
The Wrights invented only a particular improvement to flying machines, albeit a critical one: they came up with a way of warping a wing to control the direction of flight while turning a rear rudder to counterbalance the effect of bending the wing, maintaining the stability of the plane. The Wrights solved the stability problem by having a single cable warp the wing and turn the rudder at the same time. Their patent, however, was not so limited, and they successfully asserted it against subsequent inventors such as Glenn Curtiss. Curtiss improved the design of the wing by using ailerons, movable portions of the wing that had been developed years before by a consortium of others, including Curtiss and Alexander Graham Bell. A frustrated Curtiss was reported to have said that the Wright brothers believed their patent was so broad that anyone who jumped up and down and flapped their arms infringed it.
In short, the Wrights made an important improvement on the existing flying machines, but were basically granted a patent that covered all engine-powered flying machines.

These examples don't exhaust the instances of overly broad patents, but they highlight the most famous ones, and show that issuing overly broad patents has long been an issue with patent law. One of the main flaws of a patent system is that it relies on human beings, who can err in ways that are very expensive to correct.

Sunday, July 31, 2011

Parasites

I just finished reading Parasite Rex by Carl Zimmer, a fascinating book about parasites. Serious study of parasites in biology is a fairly recent development. Add to that the inherent difficulty of studying parasites (since they live inside other living things), and it turns out we really don't know a lot about parasites overall.

What we do know, though, is very surprising. For one, there are tons of species of parasites: an estimated 75% of all species are parasites. All free-living animals have to deal with parasites, and many animals have to deal with multiple parasites, if not whole arrays of them. We should mention that there's no reason to distinguish an infectious disease from a parasite: cold viruses and malaria protozoa are just as much parasites living at your expense as a tapeworm or hookworm.

Secondly, many parasites have very sophisticated life cycles, not uncommonly passing through two or three hosts over the course of a life, and perhaps equally as many life stages (the way a butterfly goes through the stages of caterpillar, chrysalis and butterfly). There's a parasite, for example, the lancet liver fluke (Dicrocoelium dendriticum), that lives in sheep but is passed to snails (through the sheep's dung), then to ants (through a slime the snail secretes), and then back to sheep (when the sheep accidentally eat the ants while grazing).

Thirdly, many parasites are very good at manipulating their host for personal gain. This may be as simple as the cold virus causing us to sneeze (in the hope that we'll sneeze out the virus and it'll land on a new host), or as elaborate as parasites that change an animal's behavior or color to make it more vulnerable to predators. It'd be like being infested with a parasite whose life cycle depended on being passed on to bears, and which, to further this purpose, gave infected persons a sudden urge to start a fistfight with a grizzly bear. Or (perhaps more realistically) it'd be as if a venereal disease, in order to spread itself more rapidly, made you much more horny and uninhibited in propositioning people for sex (some think herpes might even do something like that; see Survival of the Sickest, pp. 113-14). In the case of the lancet liver fluke that goes from sheep to snails to ants, it causes infected ants to climb to the top of a blade of grass and wait there all night (in the hope that the ant will be eaten by a grazing sheep).

There's an interesting theory out there that parasites are the reason for the development of sexual reproduction. As nice as sex is, it's not entirely obvious, from an evolutionary perspective, why it's advantageous. Without sexual reproduction, living things can reproduce more quickly and abundantly, and without all the time and resources devoted to mating. Sexual reproduction has its advantages, but the sheer abundance of asexual species, such as bacteria, suggests that asexual reproduction might be better. One theory about how sexual reproduction evolved came about when a researcher, Curtis Lively, found a New Zealand snail, Potamopyrgus antipodarum, that was able to switch between sexual and asexual reproduction. He found that the snails that were more afflicted with parasites were using sexual reproduction, whereas the ones that were mostly parasite-free used asexual reproduction. Sexual reproduction allows the snails to change more rapidly, producing offspring that are more dissimilar to their parents and thus less likely to be afflicted by parasites. It may not necessarily be the case that parasites are what pushed species in the past to adopt sexual reproduction, but it certainly is the case with these snails and may be the case with many other species.

Much of this shows that our traditional hierarchies of food chains aren't entirely accurate. Preying on the predators at the top of the food chain are any number of parasites, which might be as ecologically important for culling the herd (in this case, reducing the number of predators) as the predators are for reducing overgrazing by the herbivores they prey on. It's always interesting how even we humans, masters of the planet, are brought low by the ravages of disease and parasites.

Tuesday, July 26, 2011

Software Patents

There's an hour-long segment from "This American Life" called "When Patents Attack." It focuses on the rampant problem of patent trolling in the world of software patents and how far these patents have deviated from working "To promote the Progress of Science and useful Arts"; instead, they're working to promote business models built entirely on buying up overly broad patents and then extracting money from actual innovators by forcing them to pay licensing fees. The program appropriately compares this business model to a protection racket, the old mob scheme in which the local mafia would extort fees from local businesses in order to "protect" them. And if they didn't pay up, the mob would damage their property, burn down their building, physically assault them, and so on.

The scary thing is how often patents are issued despite substantial prior art that should negate them. A patent is not supposed to be issued when an innovation is either obvious to others in the field or covered by prior art, meaning that someone already figured out how to do it. Nonetheless, redundant patents are issued all the time. One patent they highlighted on the show, number 5,771,354, issued to a fellow named Chris Crawford in 1998, is a broad patent that covers things like cloud storage, online sales and automatic software updates over the internet. It should never have been issued, because there was considerable prior art, and it contains nothing non-obvious, since any programmer could easily figure these things out. In fact, it overlaps with a great many other issued patents. According to a software search, there were 5,503 active patents at the time this patent was issued that covered the same innovations. And yet the patent was issued, is now valuable, and is being used to extort money from companies that were easily able to come up with these innovations without even being aware of the patent's existence. And this is no anomaly: an estimated 30% of patents are issued for already-patented inventions.

At the very least, a reform of the system of issuing patents needs to be considered, so that patents that should be negated on grounds of prior art and obviousness aren't issued so frequently. Though it also tempts one to think that software patents are a bad idea to begin with, and that we should return to the old regime where software was covered only by copyright.

Added: Kent Walker, Google's Senior Vice President & General Counsel, talked with TechCrunch yesterday about patents and the big Nortel patent auction Google was involved in. He sees the patent system as failing to encourage innovation, noting:
When you see a lot of [Venture Capital] money flowing into the acquisition and holding of patents, it’s a problem. These are not companies doing new things, they’re buying them. You see hundreds of millions and billions of dollars flowing in to exploit others
...
An average patent examiner gets 15 to 20 hours per patent to see if it’s valid. It can take years to go back and correct mistakes.

Monday, July 25, 2011

Freighter Repo Man

The Telegraph has a story about a man who's found himself a very interesting line of work. He's actually involved in a broad range of issues dealing with maritime shipping and has quite a few different interests. The author of the article, Richard Grant, writes, "He works as a maritime lawyer, a ship surveyor, an insurance adjuster, a pilot and flight instructor, a stuntman for films and television, a blues drummer in New Orleans bars, and a scattershot business entrepreneur."

But the most interesting part is his work as a sort of repo man, someone who steals back freighters that have been seized.
Hardberger is a 62-year-old adventurer from Louisiana who specialises in stealing back ships that have been fraudulently seized in corrupt ports, mostly in Latin America and the Caribbean.

He describes himself as a 'vessel repossession specialist’, a kind of maritime repo man who ghosts into tropical hellhole ports, outwits the guards and authorities, and ghosts out again with a 5,000- or 10,000-ton cargo ship, usually under cover of darkness and preferably during a heavy rainstorm.
For a measly $100,000, he'll swoop in and take back your seized freighter, whether that involves negotiations, bribes or just outright stealing it from under the noses of the harbor guards and coast guard. The article details many of the adventures he's had over the years and is well worth reading.

He's even got a bad-ass name, Max Hardberger (if you put both "Max" and "Hard" in your kid's name, it's doubtful you're expecting to raise a pushover).

Probably the least surprising part of the story is that "A Hollywood film about his escapades is planned, with The Good Pirate as a working title." That could potentially be very interesting if it does get made.

Saturday, July 23, 2011

End of the Dollar Coin Bonanza

Using dollar coins instead of dollar bills saves money. Though coins are more expensive to produce (costing 8¢ each, versus 3.8¢ per dollar bill), they last much longer and need to be replaced much less frequently, so the government could save money by producing only dollar coins. This has long been recognized, and in 1997 Congress passed a bill to start minting dollar coins again, creating the Sacagawea dollar, which was finally released and distributed beginning in 2000. Dollar coins, though, have never really caught on with the public, probably because they're much bulkier than paper money and because the US Bureau of Engraving and Printing continues to produce dollar bills alongside them. The government had already tried dollar coins in 1979 with the Susan B. Anthony coin, but it died quickly. And they continue to try, passing a 2005 law that created a new set of presidential dollar coins, which are being minted but have largely gone uncirculated.
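To make the cost argument concrete, here's a back-of-the-envelope break-even sketch. The production costs (8¢ per coin, 3.8¢ per bill) come from the post; the lifespans are my own rough assumptions for illustration, not figures from the article.

```python
# Back-of-the-envelope: cost of keeping one dollar in circulation for 30 years.
# Production costs are from the post; lifespans are illustrative assumptions.
BILL_COST = 0.038   # dollars to print one $1 bill
COIN_COST = 0.08    # dollars to mint one $1 coin
BILL_LIFE = 4       # assumed years a bill survives in circulation
COIN_LIFE = 30      # assumed years a coin survives

years = 30
cost_bills = (years / BILL_LIFE) * BILL_COST   # replacements needed x unit cost
cost_coins = (years / COIN_LIFE) * COIN_COST

print(f"bills: ${cost_bills:.3f}, coins: ${cost_coins:.3f}")
# -> bills: $0.285, coins: $0.080
```

With these (assumed) lifespans, keeping a dollar in circulation as bills costs roughly 3.5 times as much as doing it with a single coin, which is the whole argument for the switch.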

In an attempt to get the dollar coins more widely distributed, the US Mint instituted a program where individuals could buy rolls of the dollar coins at face value. In fact, the US Mint would even ship the coins to you for free and accepted many forms of payment.

People with credit cards that had rewards programs at some point realized there was an opportunity in this. If you purchased, say, $1,000 worth of dollar coins with your credit card, you could then deposit those coins in your bank account and use the balance to pay off the credit card bill. On net, you've neither lost nor gained any money, but, by using your credit card, you've earned reward points, which you can accumulate and redeem for free flights, free gifts, gift certificates or whatever.
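The arithmetic of one cycle of the scheme can be sketched in a few lines. The 1% reward rate and the $1,000 purchase size here are my own illustrative assumptions; the post doesn't give specific numbers.

```python
# Sketch of the dollar-coin rewards arbitrage (illustrative numbers).
REWARD_CENTS_PER_DOLLAR = 1  # assume a card earning 1% back in rewards

def one_cycle(purchase_dollars):
    """Buy coins at face value, deposit them, pay off the card in full."""
    deposited = purchase_dollars                  # coins go straight to the bank
    out_of_pocket = purchase_dollars - deposited  # deposit covers the card: nets to zero
    reward_cents = purchase_dollars * REWARD_CENTS_PER_DOLLAR
    return out_of_pocket, reward_cents

out, cents = one_cycle(1000)
print(out, cents / 100)  # -> 0 10.0  ($0 out of pocket, $10 in rewards per cycle)
```

The point the sketch makes is that the buyer's cost is exactly zero while the rewards accrue linearly with volume, which is why the same people kept coming back for ever-larger purchases.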

The loser in all this was the US Mint, which had to pay the credit card fees and the shipping. Even worse, the coins weren't getting distributed, as was the point of this whole thing, since the banks would usually just end up shipping them back to the Federal Reserve after customers deposited them, contributing to an ever increasing cache of dollar coins in the Fed's vaults.

It couldn't last forever, though. The US Mint started to realize what was going on when the same people came back again and again to make large purchases of dollar coins. The Mint first started restricting the number of purchases that people could make, and then started getting in contact with customers to make sure the coins were for legitimate business purposes.

Such measures mitigated the problem, but it still continued. In fact, word was getting out (first on NPR's Planet Money on July 13, then on MSN Money on July 15), with the likely prospect of the problem only getting worse. Thus, beginning yesterday, the US Mint stopped accepting credit cards for buying dollar coins. You can still buy the dollar coins, but only with wire transfer, check or money order.

The lesson to be learned from this: if you find a really cool way of making money like this, don't tell anyone.

Inspiration vs Copyright Infringement

Michael Zhang asks the question "At What Point Does Inspiration Turn Into Copyright Infringement?" Namely, if an artist is inspired by another artist and uses similar elements or lifts pieces from another artist's work, how similar do the works have to be for it to count as copyright infringement? For example, if you love the character of Jay Gatsby and wanted to create a character based on him, would it be copyright infringement if your character is a newly rich man in love with a woman from old money? What if he had fallen in love with her when he was just a poor soldier? What if he became rich working for the mob? What if his name is Jay? Where exactly do you draw the line?

The example that Zhang points to is not a good example. Janine Gordon is suing photographer Ryan McGinley claiming he stole photography ideas from her and that his works represent blatant copying, but the similarities between their photographs seem superficial.


Mike Masnick, I think rightly, points out, "Honestly, it's difficult for me to even say that McGinley's are 'inspired' by Gordon's, let alone copies."

José Freire, a gallery owner who's worked with McGinley responded in more detail, saying:
Among the artists named in reviews and essays about McGinley over the years one will find: Richard Avedon, Robert Mapplethorpe, Irving Penn, Man Ray, Alfred Steiglitz, Peter Hujar, Edward Weston, Catherine Opie, William Eggleston, Ansel Adams, and Dash Snow. Janine Gordon’s name has never once appeared as a comparison. These references, by numerous preeminent critics and curators, were not made to cast doubt on McGinley’s artistic process but rather to describe the status to which his work aspires.
...
Gordon’s claims for originality are extraordinary: she claims to have invented, among other things: visible grain and other errors in the image; the injection of the monochromatic into photography; the depiction of chaos; the use of smoke; the documentation of sub-cultures; and certain types of rudimentary composition (such as placing figures in the center of the page; or in a dynamic relationship to the edge of the image). She even appears to lay claim to “the kiss” as a “concept.”
...
[Gordon] states that there are 150 instances of “copyright violation”, however, these include numbers of images which are video stills taken by persons other than McGinley during extensive commercial shoots, pictures not even taken by McGinley, and images which resemble each other only if cropped; rotated; inverted, rendered in grayscale, or otherwise dramatically altered.

We are confident that Gordon’s case has absolutely no merit whatsoever and that her litigation will ultimately do more damage to herself than to McGinley.
That last point is important: not all of the 150 examples of supposed infringement are actually photos taken by McGinley, including the fourth image embedded above, of the three people lying on the bed. And, unsurprisingly, Gordon has a history of such lawsuits.

Nonetheless, the question still stands. How similar can two works of art, or even elements within a work of art, be and still count as inspiration/homage/reference, and at what point does the similarity become theft/infringement/derivation, even leaving the legal question aside?

The whole problem is exacerbated by the fact that there is no objective way of measuring difference or originality. People have very strong opinions about apparent rip-offs, but they're entirely subjective. It's not like the judge in this case can pull out some super-secret originality yardstick from behind his bench, measure the respective differences and declare with certainty whether they are or are not above the statutory limit. The subjectiveness of such questions is fine for art history, since critics are free to squabble over them for generations on end. But it's a big problem for copyright law, since ultimately someone, whether judge or jury, is going to be put in the position of deciding the question, and their quite arbitrary and personal decision is going to be exalted to legal fact by the force of law.

Though I agree with Freire that the case is without merit, it's entirely possible that a sympathetic judge will rule against McGinley. It might be fun to discuss and debate such questions of originality, but it's deadly serious when potentially hundreds of thousands, if not millions, of dollars ride on the answer.

Friday, July 22, 2011

How to write a review

Robert Pinsky talks about what makes a good book review by looking at a famously malicious review that John Keats' Endymion received when it was published in 1818. The review was written with gleeful acerbity by John Wilson Croker. It fails not just because it savages a true poetic masterpiece, but because it fails to do what a review is supposed to do. Pinsky explains that there are three things a review should include:
1. The review must tell what the book is about.
2. The review must tell what the book's author says about that thing the book is about.
3. The review must tell what the reviewer thinks about what the book's author says about that thing the book is about.
It seems like a good way to do a review to me. I remember when I used to read Peter Travers' movie reviews in Rolling Stone; he once explained that the purpose of a movie review is to get people to the movies they're going to like. A review is not merely a platform for a reviewer's personal taste; a reviewer is not some sort of anointed cultural gatekeeper; a reviewer is just someone there to help potential customers.

Monday, July 18, 2011

Borders Closing

Borders is closing down. They've been in bankruptcy proceedings since February, and, after unsuccessful attempts to sell the company off, they've decided to liquidate their assets and close all their stores. The key lesson is that the world changes, and even the most dominant company can be brought down by shifts in the market. Not only have people been getting more of their print books online, mostly via Amazon, but they've also been buying more ebooks and fewer print books, to the point that ebook sales now surpass print book sales (as I mentioned earlier).

As with all changes, some things will be lost, just as big advantages are gained. This, not infrequently, leads to a lot of usually unnecessary worrying about that which is lost. Michael J. De La Merced and Julie Bosman write:
The news exposed one of publishers’ deepest fears: that bookstores will go the way of the record store, leaving potential customers without the experience of stumbling upon a book and making an impulse purchase. In the most grim scenario, publishers have worried that without a clear place to browse for books, consumers could turn to one of the many other forms of entertainment available and leave books behind.
The worry about losing the serendipity of stumbling upon a book you haven't heard of while browsing through the stacks seems rather silly to me, since Amazon has expended great effort in encouraging this same type of serendipity. Amazon has its "Customers Who Bought This Item Also Bought," "Bestseller Rank" and "Customers Also Bought Items By" lists, as well as its user-generated "Listmania!" and "So You'd Like to..." lists, which are all very useful for finding books you haven't heard of. All it takes is a little browsing through these links to discover totally new titles of interest.

The worry about the lack of a "clear place to browse for books" seems equally silly. Just because people can't walk through a bookstore doesn't mean they'll stop reading books. In fact, one of the reasons Borders is failing is that people have decided they prefer browsing through an online bookstore from the convenience of their computer. The publishers make it seem like brick-and-mortar bookstores have been unwillingly taken from consumers, whereas the reality is that it's the consumers who have largely taken themselves out of the bookstores. People browse online. They browse on Amazon. They browse in ebook stores. They get recommendations from friends. They read book reviews online. People browse differently, and they read differently. This doesn't mean there's no demand for books anymore, just less demand for print books.

So, the world continues to change. Things that were once one way aren't so anymore, and I can't deny that some things will be missed. I have fond memories of browsing through Borders stores and reading stuff I pulled off the shelf. But I, like most other consumers, like getting my books online better.

Sunday, July 17, 2011

Tim Harford TED Talk

Tim Harford gives a TED talk discussing trial and error and the God complex. He defines the God complex as the belief that, no matter how complex the situation or problem a person faces, that person is infallibly right. Harford says the antidote to this problem is trial and error.

The real advantage of trial and error is that it allows us to solve problems that surpass our ability to understand them. I remember reading Ray Kurzweil a long time ago, and he argued at one point, basically, that evolution is a sort of simplistic intelligence. Because it works through trial and error, it can produce living things of intelligence that far surpass itself (like us humans, for example), though, because it is rather simple and crude, it moves slowly, very slowly. Kurzweil was, of course, talking about the singularity and making a point about how much more sophisticated intelligences, like humans and, some day, super-intelligent machines, can accelerate this evolution. But the relevant insight for us here is that the great advantage of trial and error is that it's a means to successfully accomplish things that only something much smarter than us could comprehend.

Harford calls trial and error an antidote to the God complex, simply because it so elegantly shows us that we don't understand things we think we understand. But he ends with the point that, though everyone knows trial and error is great, it's criminally underutilized, because the God complex is so seductive. There are so many people who are utterly confident that they are right, and since we put more faith in people who are more confident, such people have influence, all too much influence, I'm afraid.

Friday, July 15, 2011

How the internet affects our memory

New research shows that we tend to remember more poorly things that we think we can look up. As the New York Times describes it:
Dr. Sparrow and her collaborators, Daniel M. Wegner of Harvard and Jenny Liu of the University of Wisconsin, Madison, staged four different memory experiments. In one, participants typed 40 bits of trivia — for example, “an ostrich’s eye is bigger than its brain” — into a computer. Half of the subjects believed the information would be saved in the computer; the other half believed the items they typed would be erased.

The subjects were significantly more likely to remember information if they thought they would not be able to find it later. “Participants did not make the effort to remember when they thought they could later look up the trivia statement they had read,” the authors write.
In other words, we don't put as much effort into remembering things we don't think we have to remember. Does it show that our memories are poorer because of the internet? No. In fact, nowadays we're barraged with so much more information and trivia than in the past that learning what's important to actually retain in your noggin, and what you can just look up later if you ever need it, is a really important skill. In fact, the researchers note that people tend to remember how to find the info, or where it's stored, better than the info itself.

Ronald Bailey at Reason magazine makes the appropriate connection to Plato's Phaedrus (274e-275b), where Socrates criticized writing's effect on memory, saying:
this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.

But Socrates has turned out to be wrong. Writing doesn't diminish our memory; it just changes the way we remember things. Just as writing was in Plato's day, the internet is becoming a sort of backup drive for our memory, a place to go to access stuff we don't have enough room for on our main drive. It supplements our memory. Wegner calls this type of thing "transactive memory," a place "where information is stored collectively outside the brain."

And there's nothing new here. We've always used other sources to supplement our memory, whether it's asking friends, looking it up in books or checking our notes. The difference is that now the internet is almost exclusively filling that role.

Wednesday, July 13, 2011

The Oxford Shakespeare Theory

I was just noticing that Roland Emmerich is working on a movie premised on the idea that Shakespeare's plays were actually written by Edward de Vere, Earl of Oxford (due out in 2012; a trailer is already up). My first reaction was, "How is Roland Emmerich going to incorporate the destruction of famous landmarks into a movie set in Elizabethan England?" But my second reaction was, "So, they're making a movie out of the ol' Oxford-Shakespeare theory. Interesting."

The idea that the William Shakespeare from Stratford-upon-Avon was not the true author of the plays attributed to him is an old theory. First, in the nineteenth century, it was proposed that the plays were written by Francis Bacon. This theory runs into the problem that Bacon's style is fairly distinct from Shakespeare's, and Bacon is not otherwise known to have written any plays.

The theory that Christopher Marlowe wrote Shakespeare's plays was proposed next. This had more plausibility, since Marlowe did write plays, was very good at writing plays, and certainly had more stylistic similarity to Shakespeare. But it ran into the problem that Marlowe was dead, having died in 1593, about twenty years before Shakespeare retired in 1613. No problem: these people claimed that Marlowe faked his own death. The problem this runs into is that his death, stabbed in a bar fight, doesn't exactly fit a plausible description of a faked death. Marlowe was a very public figure, a well-known playwright, who was killed in a very public place, a bar, and his death was followed by a post-mortem and inquest. If you want to fake your own death, you're much better off doing it in a way that leaves very few witnesses and little evidence, like, say, dying in a fire or explosion or plane crash or drowning at sea. Heck, even in this day and age you could probably get away with faking your own death as a drowning at sea (note to future self: do not attempt). Even though faking your death in 1593, with its rather primitive forensic science, would have been a lot easier than now, it's still hard to imagine how Marlowe could have gotten away with it.

The currently most popular theory of alternative Shakespearean authorship is the Oxford theory, attributing authorship to Edward de Vere. This is more plausible, since de Vere was known as a celebrated poet and playwright in his day, was a patron of the theater, and survived until 1604, which means we only have to assume that some of de Vere's works were performed posthumously, which is possible.

On the other hand, we should note that most Shakespeare scholars are Stratfordians, which is to say they believe that the plays of Shakespeare were written by the William Shakespeare from Stratford-upon-Avon, not by Edward de Vere or Bacon or Marlowe or anyone else. They believe this for a number of reasons, based on very good evidence. For one, there is the simple and obvious reason: the plays were, in their day, widely attributed to Shakespeare. The facts that everyone said the plays were written by Shakespeare, and that all of the (admittedly unauthorized) publications of the plays that name an author attribute them to Shakespeare, are pretty strong evidence. Admittedly, it's possible that there was some sort of clandestine intrigue behind the scenes to obfuscate authorship, but in the absence of evidence of such intrigue, it's best not to assume that everyone was being duped. Additionally, we have good evidence that William Shakespeare of Stratford was a real person, which makes one wonder why de Vere (or one of the other supposed authors) would attribute his plays to a real person, a minor actor in an acting company, instead of just making up a pseudonym like "Eddy Veretti" or "Redox Fordbridge" or something.

Also, most scholars reject the argument behind all the alternative authorship theories: that, since education wasn't widespread then and Shakespeare wasn't from the gentry that could afford a high-quality education and access to books, Shakespeare simply wasn't well-educated or cultured enough to have written such plays. The truth is that Shakespeare was the son of a prominent merchant, had access to a rigorous grammar school education, and certainly became well-connected with the English aristocracy as he became more prominent. Not to mention that most of Shakespeare's plays are adaptations, not original works, meaning a lot of the details Shakespeare was supposedly not able to know come directly from the works he adapted. Additionally, we have only a small sliver of the plays written during Shakespeare's time, meaning that literary allusions we now assume to be possible only for someone well-educated may in fact have been quite commonplace in the theater community at the time. In fact, some Cambridge students, in 1601, mocked the university-trained playwrights for overusing classical allusion, and noted how Shakespeare, not university-educated, was fortunately clear of that vice (quoted here).

When I was a young English major pursuing my undergraduate education, I too toyed with the idea of alternative Shakespeare authorship, first with the Marlowe theory and later with the Oxford theory. But ultimately I dropped them because there were a couple of problems I couldn't reconcile. For one, Shakespeare became extremely wealthy during his career. He was an actor, but reportedly a relatively minor and not particularly celebrated one. It just didn't seem plausible that a minor actor could accumulate enough wealth to become, for example, a part owner of the Globe theater.

Even more implausible to me was the idea that a prominent and dignified courtier could write a play as bloody as Titus Andronicus. In the play, not only is Titus' daughter raped and her hands and tongue cut off, but Tamora's sons are killed, baked into a pie and fed to her. Trying to imagine a stately Elizabethan aristocrat writing such stories is really difficult. (There are authorship questions surrounding Titus Andronicus, but these don't really change things, since most scholars believe Shakespeare either wrote all of it or co-authored it and was still the author of these famous bloody scenes.)

That being said, though I think the Oxford-Shakespeare theory is wrong, it's still an interesting and tantalizing theory. So making a movie based on it may not be a bad idea, and it could turn out to be a good movie. It's just that Hollywood has a rather unfaithful relationship with historical accuracy, and this movie will probably be no exception.

Tuesday, July 12, 2011

Unrealistic fantasies

A while ago I posted about how romance novels seem to be for women what porn is for men. So it's interesting to see, at around the same time, a study by some authors who claim that romance novels lead to sexual health problems, and a rant by Naomi Wolf claiming that porn leads men to mental health problems, such as porn addiction and a propensity for extreme sex.

Both articles present a veneer of scientific credibility, but neither appears to have much weight to it. Mind Hacks takes apart Wolf's argument pretty succinctly, showing how she misuses some facts and basically doesn't understand the neurochemistry she presents in the article. The article about romance novels, similarly, doesn't seem to present any evidence for its conclusion. It basically looks at romance novels, surveys their content and pretty much says, "If women take these stories as realistic expectations for what real-world romances will be like, that'll lead to problems," which is tantamount to saying, "If young kids read Harry Potter and start to expect that they're going to very soon discover they're wizards capable of magic, that'll lead to problems." Nowhere does the article actually show that women pick up romance novels thinking they're going to be educational and informative.

So, I guess, maybe, porn and romance novels aren't that bad after all. Oh well. I guess we can go back to letting people occasionally and temporarily escape into their unrealistic fantasies.

Sunday, July 10, 2011

Everything is a remix

I also wanted to mention that the third part of Kirby Ferguson's series "Everything is a Remix" is up and is totally worth watching. The series overall is quite good, and this installment is no exception. The basic premise is that all innovation and invention in art, science and technology is really just remixing. Ferguson defines remixing as "to combine or edit existing materials to produce something new," and goes on to explain how music, film and (in this installment) invention are just remixes.

In short, what makes new innovations new is really just that they take old things and combine them in new ways.

I remember reading a critique that Jacques Derrida made of Claude Levi-Strauss. Levi-Strauss had said that his methodology as an anthropologist was bricolage, namely taking existing and available tools and using them for new purposes. The opposite would be an engineer, who constructs the proper tool for the proper purpose. Derrida's critique was that all discourse is bricolage. He made this point by noting that a new thinker simply can't reconstruct "language, syntax, and lexicon" from scratch. In short, all intellectual discourse, which is to say all intellectual history, is bricolage, because the thinkers, the philosophers, scientists, historians, economists and so on, are taking existing language and ideas and trying to use them to describe new ideas that they weren't specifically designed for.

Or, to put Derrida's point in the terms that Ferguson co-opted from music to describe the history of innovation: all intellectual history, including all philosophy, is a remix.

Germinating good ideas

Been reading Steven Johnson's Where Good Ideas Come From. The basic premise of the book is that good ideas really involve putting together and integrating existing ideas. Thus, the way to promote good ideas is to permit openness and connectivity, so that people can take other people's good ideas (or nascent ideas) and pull them together to create new good ideas.

He talks for one about how many good ideas come from talking with and sharing with other people. We have this perception that good ideas come from lone geniuses dreaming up brilliant pieces of insight in profound "Eureka!" moments while meditating alone. The truth is that most ideas come from people talking and collaborating. He notes that in research labs, most of the big breakthroughs actually come about through staff meetings and presentations when people share and critique their discoveries.

He also notes that sudden flashes of insight are not the norm. We may perceive ideas as coming to us all of a sudden, like a bolt of lightning, but the truth is that good ideas germinate slowly. What we perceive as Eureka moments are just one salient step along a long process of careful consideration. Since good ideas take a lot of time and germination, it's really useful to write down our thoughts. A good idea doesn't leap fully formed from your head, but needs to be germinated. And to germinate a good idea you need to remember it, so that you can return to it and re-return to it, adding to it and refining it. This is the reason it was extremely common for great thinkers from the 17th century forward to take voluminous notes, filling notebooks with ideas, quotes, scraps of thought and experiences. These notebooks would facilitate the germination, as these thinkers would return to them and rethink the information that seemed most worth noticing.

As I'm working on my dissertation, which has inevitably taken me deep into Nietzsche's voluminous notebooks, such insights certainly ring true with my experience. Not only did he take voluminous notes, but in those notebooks are the initial insights that would ultimately lead to his more famous ideas. The Revaluation of All Values, which I am in particular studying, began as a crude idea when Nietzsche was just a young professor fresh out of college, but didn't really become the fully fledged idea that you read about in philosophy textbooks until well over ten years later.

What is also interesting about Johnson's insights is that the notebook, at least for some people, has been replaced by something arguably better: the blog. Though people do use blogs for different things, for many people a blog is like an open notebook, where you can link to stories and ideas you like, engage in debates and record your thoughts and experiences, just like people would do with their notebooks in the past. Except it's better than the notebook because it has the quality of openness and sharing that the standard closed notebook lacks. Thus, it can be used to germinate ideas for the author of the blog, as well as share those germinal ideas with others, potentially benefiting them.

Friday, July 8, 2011

Suitable

In Oak Park, MI they have a law that says that front yards must have "suitable, live, plant material." A woman decided to plant a vegetable garden in her front yard, but a city planner, Kevin Rulkowski, subsequently decided it violated that law. Clearly the problem wasn't the "live, plant material" part, since her vegetable garden is unambiguously that. The problem was with the ever-vague and subjective "suitable."

Of course what is "suitable" is entirely a matter of personal opinion, but Rulkowski decided to justify his decision by claiming:
If you look at the definition of what suitable is in Webster's dictionary, it will say common. So, if you look around and you look in any other community, what's common to a front yard is a nice, grass yard with beautiful trees and bushes and flowers
Since I was pretty sure that there is no connection between "suitable" and "common" in the English language, I went to my dictionary, and sure enough it uses words like "right" and "appropriate" but not "common." Just to check, I went to Merriam-Webster's online dictionary for a definition of "suitable," and again found words like "fitting" and "proper," but not a whiff of "common" there either.

Unsurprisingly, I'm not the first person to go to Merriam-Webster to confirm Rulkowski's mistake. It's actually led to a number of comments, 19 at the present count. My favorite is, "Suggestion for new antonym: Rulkowski."

An even better one can be found on the commentary to this story at The Agitator, where commenter "Jeff" says: "Walking through my little city, I occasionally see gardening boxes in a front yard. It always makes me smile, as it seems like a good use of space. Dare I say suitable?"

I would speculate that perhaps Rulkowski's dictionary is broken, though I suspect the problem is user error.

Tuesday, June 7, 2011

When you give people freedom, they make surprising choices

Ezra Klein addresses the issue of physician assisted suicide. He is reluctant to accept the legality of it, and he gives some reasons worth considering. He quotes Ezekiel Emanuel who says:
Broad legalization of physician-assisted suicide and euthanasia would have the paradoxical effect of making patients seem to be responsible for their own suffering
[…]
Rather than being seen primarily as the victims of pain and suffering caused by disease, patients would be seen as having the power to end their suffering by agreeing to an injection or taking some pills; refusing would mean that living through the pain was the patient’s decision, the patient’s responsibility.
Klein sums up the position, which I think really gets to the heart of his concern: "I do buy into Emanuel’s concern that it’ll give the people around them too much choice, and that the long-term consequences of that are unsettlingly unpredictable."

Klein is saying that when you give people freedom, they make surprising and unpredictable decisions, which sometimes are bad decisions. I, on the other hand, think physician-assisted suicide should be legal. Suicide, for one, should not be restricted because it's your body and your life, and you thereby should make the decisions about it. And a physician should be permitted to help, since they can make the suicide less unpleasant. But, still, at the end of the day you have to have faith that when given freedom people will make the right decisions. Certainly, people will make poor decisions sometimes, but I don't see it as plausible that laws or lawmakers will be able to make consistently better decisions for them. I personally don't think that, if physician-assisted suicide is legalized, it will be anything but a grave decision, but the results will doubtless be unpredictable.

Monday, June 6, 2011

Tennis is better at distinguishing the better from the lesser

The French Open just finished up yesterday with perhaps the two best men's singles players duking it out in the final: Rafael Nadal (ranked #1 right now) and Roger Federer (ranked #3 right now). Unsurprisingly, victory went to Nadal, who's now won 6 of the last 7 French Opens and overall dominates on clay courts, having won 227 out of 244 matches played on clay, an astounding 93%. These are two very closely matched top players, but it's quite clear that Nadal was the champion, winning 7-5, 7-6, 5-7, 6-1. In fact, one of the nice things about tennis is its ability to really sort out even small differences in player ability.

It does this because a full tennis match includes a lot of points. At the bare minimum, a five-set match will include 72 points, but most matches include more than twice that many and usually last 2-3 hours. And that's not to mention that these individual points involve, on average, several passes over the net and several strokes by both players. What this means is that the probability that one or the other player will edge ahead from luck alone is very small. Lucky points do happen: a player might hit a ball barely in or barely out, or catch the net just right to catch his opponent off guard, or translate a wild, reaching shot into a winner, or benefit from a bad call. But over the course of a match involving over a hundred points and hundreds of strokes, the probability that one player will get significantly more lucky points than the other is vanishingly small. The only time it's likely to happen is if the game is so close that the players are practically neck and neck. In such cases, a little bit of luck can be just enough to push one player ahead. But in most cases, that little bit of luck is not enough to make any difference. It would take a whole lot of lucky points for a weaker player to be able to edge ahead. It's reminiscent of that quote from Woody Allen's Match Point, when Tom tells Chris, the former pro tennis player, that he'd seen him play against top players and he held his own for a while. Chris responds: "But as the game goes on, you see how really good they are." In other words, luck or momentum might be on your side for a little while, but in the end, superior skill wins out.
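A quick simulation illustrates the point. The sketch below is a deliberate simplification of my own (it just counts which player takes the majority of independent points, ignoring tennis's actual game/set/tiebreak scoring, and the 53% point-win rate is an arbitrary illustrative number), but the trend it shows is the one described above: the more points in a match, the rarer the upset.

```python
import random

def stronger_player_wins(p, n_points):
    """Simulate n_points independent points, where the stronger player
    wins each point with probability p; return True if that player
    takes the majority of the points."""
    wins = sum(random.random() < p for _ in range(n_points))
    return wins > n_points // 2  # use an odd n_points so there are no ties

def upset_rate(p, n_points, trials=20_000):
    """Fraction of simulated matches in which the stronger player loses."""
    return sum(not stronger_player_wins(p, n_points) for _ in range(trials)) / trials

random.seed(1)
# A player who wins 53% of individual points is only slightly better,
# yet upsets become steadily rarer as the points per match grow.
for n in (11, 71, 151, 301):
    print(n, round(upset_rate(0.53, n), 3))
```

With only 11 points the weaker player wins a large share of the time; by a few hundred points (roughly a real tennis match) the upset rate has collapsed, which is the sense in which tennis "sorts out" small skill differences.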

Such is the case with all high-scoring games. In basketball, for example, the typical game involves over 150 points, with a lot of changes of possession and attempted shots, leading to games that are also seldom decided by luck. When you look at the final score of a basketball game, or especially of a tennis match, you can pretty clearly see how close and how hard fought the game was. A tennis match that is won in straight sets 6-1, 6-3, 6-2 is a decisive victory, whereas a hard-fought 7-5, 6-7, 6-4, 4-6, 7-6 is obviously a close match.

This is in stark contrast to low-scoring games like hockey or, even worse, soccer. As much as I enjoy watching soccer, it can be frustrating to watch games turn on lucky goals or bad calls. I remember watching in frustration during the first round of the 2010 World Cup when the US was prevented from winning its game against Slovenia after a ref disallowed a clear goal. The call was so poor that the ref was actually sent home and didn't officiate any more World Cup games, but that one call was enough to transform an American victory into a tie. On the other hand, it might have been karma after the previous game against England, when the US was handed a tie after the English keeper handled an easy save so poorly that it resulted in a goal for the US. This one mistake changed a loss into a tie for the US. Though better teams do tend to win in soccer (the World Cup was won by Spain, the favorite going in), luck plays a much bigger factor. Consider that 2 out of the 16 games of the knockout stage in that World Cup were decided by penalty shootouts. Since FIFA started using penalty shootouts, two World Cup finals (1994 and 2006) have been decided by shootouts, with an additional three decided in added extra time. Such games are close enough that they can go either way, and they occur a large percentage of the time.

The difference between tennis and soccer on this issue is one of degree: in both games better teams tend to win, but in soccer luck plays a larger role. I enjoy watching both games, but I really like the feature of tennis that players fight for many small incremental gains which add up to victory. It results in a better measure of relative ability and means that you can be assured that when a player wins, they really deserve it.

Sunday, June 5, 2011

More on Peter Thiel

More commentary on Peter Thiel's fellowship, which I mentioned before. Naomi Schaefer Riley writes in the Washington Post:
Employers may decide that there are better ways to get high school students ready for careers. What if they returned to the idea of apprenticeship, not just for shoemakers and plumbers but for white-collar jobs? College as a sorting process for talent or a way to babysit 18-year-olds is not very efficient for anyone involved. Would students rather show their SAT scores to companies and then apply for training positions where they can learn the skills they need to be successful? Maybe the companies could throw in some liberal arts courses along the way. At least they would pick the most important ones and require that students put in some serious effort. Even a 40-hour workweek would be a step up from what many students are asked to do now.
I think this is something that should happen more often. If more businesses offered genuinely promising students who could easily go to college the option of effectively skipping college and getting drafted directly into the pros, students could save tens of thousands of dollars. College is a good idea for some people, but for others, it's just an expensive piece of paper necessary for career advancement. College is a great experience, but at the prices being charged, it's rather burdensome to spend thirty years paying off the cost of a four-year-long party.

Saturday, June 4, 2011

Dual n-back training

Dual n-back training has been shown to improve fluid intelligence. N-back training is a memory game where they basically keep feeding you info: for example, they show you a succession of spatial positions (or a series of objects or tones) and you have to recall what you saw n steps ago. For example, in 2-back training you might be given a 3x3 grid. First you're shown a shape in the top-left, then a shape in the bottom-center, then a shape on the middle-right, at which point you have to recall the position from two steps before (top-left, in this case); then they show you the middle-left and you again recall the position from two steps before (bottom-center, this time); then they show you the top-center and again recall two steps before (middle-right), and so on. You can increase the number of steps back so that you have to recall 3 back, 4 back, even up to 10 back. Dual n-back training means that you're doing two different memory tasks at once. For example, as at this online app, you are simultaneously tested on spatial and auditory memory.
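To make the mechanics concrete, here's a minimal single-stream sketch in Python (the position labels are my own illustrative names; a dual version would simply run two such streams, say positions and tones, in parallel):

```python
def play_n_back(stimuli, n):
    """For an n-back run, the correct response at each step (after the
    first n steps) is the stimulus shown n steps earlier."""
    return [stimuli[i - n] for i in range(n, len(stimuli))]

# The 2-back walkthrough from the post: at the third stimulus the
# player should recall the first, at the fourth the second, and so on.
sequence = ["top-left", "bottom-center", "middle-right", "middle-left", "top-center"]
print(play_n_back(sequence, 2))
# ['top-left', 'bottom-center', 'middle-right']
```

Raising n stretches the working-memory load: `play_n_back(sequence, 3)` would instead demand recall three steps back, which is what makes higher levels of the game so much harder.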

What it does is train your working memory, which is the memory you use to retain and make use of information. It's the using part that's most important, since working memory is not just short-term memory, but usable memory. Fluid intelligence is your ability to solve novel problems, use logic and reasoning, and recognize patterns. Improving working memory improves fluid intelligence because your brain is able to use more information to reason with and think through a problem or recognize a pattern. In short, you're building your conclusions on a wider base of knowledge.

Interestingly,
They also found that [dual n-back] training made children less likely to be fooled by tempting, but incorrect, information.
"Psychologically, training made them more conservative," Jonides said.
I don't think "conservative" is the best word here. I'd say the better word would be "skeptical" or "incredulous." In other words, children are more reluctant to jump to conclusions the broader the base of working memory they're using.

Friday, June 3, 2011

If brevity is the soul of wit then long drawn-out stories are the soul of arthouse pretension

Dan Kois in the New York Times complains of experiencing a certain high-culture fatigue. He's a movie reviewer and culture critic, and he notes that whereas he's long succeeded in force-feeding himself slow-moving, boring films and television, he's finally reached a point where he's tired of it and is constantly nagged, while watching this fare, by thoughts that he's got better things to do. Though I can definitely sympathize with him (I went through a many-year period of cultural elitism, watching primarily arthouse films and listening to decidedly modern classical and avant-garde jazz, and I, like him, eventually got tired of digesting this type of art), unlike him, I don't feel guilty about failing to appreciate this type of slow, boring film, because I don't retain the same high opinion of it that he does. In fact, part of what inspired my change in cultural diet is a well-founded skepticism of the supposed highness of this "high art."

It's interesting that Kois starts off his article by comparing his experience to that of his six-year-old daughter watching a fast-paced, densely packed kids' show, Phineas and Ferb. His daughter struggles with the show because it moves too quickly, contains references she doesn't get and has plots that (at least for a six-year-old) are dizzyingly complex. The problem, though, is that Kois tries to say the experience of his six-year-old watching this show is like him watching slow-paced and meditative films like Solaris and Meek's Cutoff. But the comparison makes no sense. The way he describes Meek's Cutoff (I haven't personally seen it), it sounds like a movie that in no way approaches the type of challenges that Phineas and Ferb presents his daughter. The plot is simplistic; the characters are sparse; one doesn't have to pay close attention to every moment to make sure one doesn't miss something, because there's not a lot going on. In fact, it sounds like his daughter would probably have a lot less trouble following the plot of Meek's Cutoff than Phineas and Ferb (ignoring the fact that she wouldn't get many of the adult themes and relationships and would have no tolerance for the ponderous pace). Solaris (which I have seen) is similar: not a lot happens, there aren't many characters, there's not much to digest. In short, one other way of describing slow-paced and meditative movies is: simplistic.

I would never want to say that the only measure of a movie is complexity, since certainly there can be quality movies that are very simple and slow-paced (I, for one, really like Days of Heaven), but I think it's fair to say complexity and sophistication are virtues of movies and storytelling in general. If we compare these slow-paced arthouse movies to something much more widely praised and fast-paced like the recent The Dark Knight, we see a profound difference in sophistication. The Dark Knight occupies a complex and detailed world, filled with a large cast of characters connected in a dense web of interrelations, and involves a plot which is intricate and complex. In short, it demands a lot of the viewer to take it all in. On an intellectual level this movie is much more engaging and challenging; it requires close attention; it rewards multiple viewings; it has a lot going on. The blunt fact is that The Dark Knight is a much more intellectual movie than Solaris or Meek's Cutoff. Many people may view it as just passive, mindless popcorn fare, but this is a mistake. Many of us, including many movie critics, mistake the feeling of mental effort for actual mental effort. Because it's so much more effort to sit through a dull movie, people think that it's genuine intellectual effort. But actually the effort that is going on is our brain trying to prevent itself from wandering off, because it's so little engaged and seeks to occupy itself with something more engaging. The Dark Knight, in contrast, seems easy to digest only because its exciting, fast-paced nature keeps us glued to the screen, such that we more effortlessly absorb all the relevant intricacies. In fact, this deep intellectual engagement is one of the pleasures of complex movies like The Dark Knight, City of God, The Big Sleep, Memento, The Usual Suspects, L.A. Confidential ...

This explains why the vast majority of high-art works don't really tend to fare well in the long run. Most people have a really distorted perception of the art of the past. Because we now consider classic works to be high art, it's assumed they were always high art. This is definitely not the case. To start with the most obvious example, Shakespeare was not considered high art by his contemporaries. The intellectual elite, the royal courtiers, were primarily engaged in what was considered the more dignified literary pursuit: writing poetry. Theater was considered entertainment for the masses, and, admittedly, as far as we can tell, most of what was written for the stage in those days (the vast majority of which doesn't survive) was probably pretty horrible and worthy of turning up one's nose at. But because only the best of it survives, and because theater is now mostly viewed as culturally prestigious, we read that perception back into history, forgetting that Shakespeare and contemporaries like Marlowe, Jonson, Webster and Kyd were, to people in those days, more like Hitchcock, Spielberg and Nolan in our own day. The elite literature of the Elizabethan era has by no means fallen into obscurity, and the poetry of the likes of Sidney and Spenser is still appreciated, but at the end of the day these highbrow artists are left in the shadow of the great lowbrow Shakespeare. And it's not like Shakespeare is some weird aberration. This is pretty much the norm (let's just start listing: Dickens, Twain, Chaucer, Homer, Balzac, Gulliver's Travels, Robinson Crusoe, Boccaccio, Ovid, Aristophanes, and on and on).

The trend is reflected in television, with rather sophisticated and complex television shows doing fabulously well. No one would describe Twin Peaks, 24, The Wire, Lost, Mad Men or many other shows as slow or meditative. Their large casts of characters and intricate plots remind one more of the complex worlds of the grand serial novels of the likes of Dickens and Tolstoy. They ask much more of the viewer but don't feel difficult, because they give the viewer more and are very engaging.

It's also under-appreciated what labors these big-budget, fast-paced films are. Those big budgets aren't just going towards pizza and doughnuts; they're going towards large masses of skilled professionals in everything from cinematography to casting to special effects. Not to mention that the pace of a scene tends to be inversely proportional to the effort expended in making it. Fast-paced action scenes, because of the superabundance of cuts and special effects, tend to be ponderous to produce, whereas nice, quiet talking scenes tend to be much easier and quicker.

Kois compares these slow arthouse films to eating vegetables, which I think is an unintentionally apt metaphor. Vegetables as a food, we must admit, are of limited nutritive value and caloric content; they serve a useful dietary purpose, but you certainly couldn't live off of vegetables alone. The real meat and potatoes of a film-goer's diet, though, are the truly great, largely mainstream, big-budget films, the movies that tend to endure and remain popular for decades to come.

This is not to say that popular mainstream movies are necessarily better, or that complex movies are necessarily better. Truth be told, most popular film is entirely forgettable. I picked The Dark Knight for a very good reason: because it stands out among popular films. The truth is that, for example, of the hundreds of movies made in the 40s, only a handful are still watched, and the same will happen to the vast majority of films made these days; they will be lost to obscurity. What I am saying is that the high-art/low-art distinction is overly simplistic. Manohla Dargis and A. O. Scott responded to Kois with "In Defense of the Slow and the Boring," but their response is overly simplistic as well, criticizing summer movies for being boring since they're frequently very conventional and clichéd. But this hardly vindicates the slow and boring, which just manages to be boring in a different way. In the long run, the art that persists as great generally consists of those works that combine a certain popular appeal with genuine originality and sophistication.

Thursday, June 2, 2011

Effectiveness Research

Bryan Caplan quotes with approval the NY Times on using effectiveness research to improve medicare treatment:
The British control costs in part by having the will to empower a hard-nosed agency, the National Institute for Health and Clinical Excellence (N.I.C.E.), to study treatments and declare some ineffective.
[...]
Even better, use clinical evidence evaluations of the British Medical Journal. They've classified more than 3,000 treatments as either unknown effectiveness (51 percent), beneficial (11 percent), likely to be beneficial (23 percent), trade-off between benefits and harms (7 percent), unlikely to be beneficial (5 percent) and likely to be ineffective or harmful (3 percent).
Of course, this puts a lot of faith in the accuracy and reliability of such research. John P. A. Ioannidis recently wrote in Scientific American of "An Epidemic of False Claims" in medical research:
False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years. The problem is rampant in economics, the social sciences and even the natural sciences, but it is particularly egregious in biomedicine. Many studies that claim some drug or treatment is beneficial have turned out not to be true. We need only look to conflicting findings about beta-carotene, vitamin E, hormone treatments, Vioxx and Avandia. Even when effects are genuine, their true magnitude is often smaller than originally claimed.
The Atlantic did a long article featuring Ioannidis and his research a few months ago, stating:
He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed.
The problems certainly aren't intractable, and Ioannidis has been working on developing better methods for evaluating the quality of research. But the blunt fact is that good medical research is hard. It requires long-term studies with lots of participants, which is not only super expensive but also super time-consuming. Additionally, the last thing you want to do to improve the quality of research is raise the incentive for people to game the results by making massive amounts of potential government funding dependent on positive research results.