I just finished reading Parasite Rex by Carl Zimmer, a fascinating book about parasites. It's only been fairly recently that biology has taken parasites seriously as a subject of study. Add to that the inherent difficulty of studying them (they live inside other living things), and the upshot is that we really don't know very much about parasites overall.
What we do know, though, is very surprising. For one, there are tons of parasite species. An estimated 75% of all species are parasites. All free-living animals have to deal with parasites, and many have to deal with multiple parasites, if not whole arrays of them. It's worth noting that there's no reason to distinguish an infectious disease from a parasite: cold viruses and malaria protozoa are just as much parasites living at your expense as a tapeworm or hookworm.
Secondly, many parasites have very sophisticated life cycles, not uncommonly passing through two or three hosts over the course of a life, with as many distinct life stages along the way (the way a butterfly goes through the stages of caterpillar, chrysalis, and butterfly). There's a parasite, for example, the lancet liver fluke (Dicrocoelium dendriticum), that lives in sheep but is passed to snails (through the sheep's dung), then to ants (through a slime the snail secretes), and then back to sheep (when the sheep accidentally eat the ants while grazing).
Thirdly, many parasites are very good at manipulating their host for personal gain. This ranges from something as simple as the cold virus making us sneeze (in the hope that we'll sneeze out the virus and it'll land on a new host) to parasites that change an animal's behavior or color to make it more vulnerable to predators. It'd be as if you were infested with a parasite whose life cycle depended on being passed on to bears, and, to further this purpose, it gave infected persons a sudden urge to start a fistfight with a grizzly. Or (perhaps more realistically) it'd be as if a venereal disease, in order to spread itself more rapidly, made you much more horny and uninhibited in propositioning people for sex (some think herpes might even do something like that; see Survival of the Sickest, pp. 113-14). In the case of the lancet liver fluke that goes from sheep to snails to ants, it causes infected ants to climb to the top of a blade of grass and wait there all night, in the hope that the ant will be eaten by a grazing sheep.
There's an interesting theory out there that parasites are the reason for the development of sexual reproduction. As nice as sex is, it's not entirely obvious, from an evolutionary perspective, why it's advantageous. Without sexual reproduction, living things can reproduce more quickly and abundantly, and without all the time and resources devoted to mating. Sexual reproduction has its advantages, but the sheer abundance of asexual species, such as bacteria, suggests that asexual reproduction might be better. One theory about how sexual reproduction evolved came about when a researcher, Curtis Lively, found a New Zealand snail, Potamopyrgus antipodarum, that was able to switch between sexual and asexual reproduction. He found that the snail populations that were more afflicted with parasites were using sexual reproduction, whereas the ones that were mostly parasite-free used asexual reproduction. Sexual reproduction allows the snails to change more rapidly, producing offspring that are more dissimilar to their parents and thus less likely to be afflicted by the parasites tuned to the previous generation. It may not necessarily be the case that parasites are what pushed species in the past to adopt sexual reproduction, but it certainly seems to be the case with these snails and may be the case with many other species.
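To make the logic behind that snail result concrete, here's a minimal toy simulation of the dynamic (what biologists call the Red Queen hypothesis). None of this is from Zimmer's book; the population size, infection penalty, and two-locus genotypes are arbitrary choices for the sketch. Parasites chase whichever host genotype is currently most common, and the only difference between the two runs is whether offspring are clones or recombined mixes of two parents.

```python
import random
from collections import Counter

ALLELES = 4              # alleles per locus
POP = 500                # host population size
GENERATIONS = 300
INFECTION_PENALTY = 0.5  # infected hosts reproduce at half the normal rate

def random_genotype():
    """A host genotype: one allele at each of two loci."""
    return (random.randrange(ALLELES), random.randrange(ALLELES))

def run(sexual: bool) -> float:
    """Return the average infection rate over the whole run."""
    hosts = [random_genotype() for _ in range(POP)]
    rates = []
    for _ in range(GENERATIONS):
        # Parasites specialize on whichever genotype is currently most common.
        target, _ = Counter(hosts).most_common(1)[0]
        infected = [h == target for h in hosts]
        rates.append(sum(infected) / POP)

        # Infected hosts contribute fewer offspring to the next generation.
        weights = [INFECTION_PENALTY if i else 1.0 for i in infected]
        mothers = random.choices(hosts, weights=weights, k=POP)
        if sexual:
            # Recombination: one locus from each parent, so rare
            # combinations keep getting regenerated.
            fathers = random.choices(hosts, weights=weights, k=POP)
            hosts = [(m[0], f[1]) for m, f in zip(mothers, fathers)]
        else:
            # Clonal reproduction: offspring are exact copies of one parent.
            hosts = list(mothers)
    return sum(rates) / len(rates)

print(f"asexual average infection rate: {run(sexual=False):.2f}")
print(f"sexual  average infection rate: {run(sexual=True):.2f}")
```

The point isn't the particular numbers it prints; it's that recombination keeps regenerating rare genotype combinations that a clonal population loses for good, which is the advantage the snail study suggests parasites select for.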
Much of this shows that our traditional picture of the food chain isn't entirely accurate. Preying on the predators at the top of the food chain are any number of parasites, which might be as ecologically important for culling the herd (in this case, reducing the number of predators) as the predators are for reducing overgrazing by the herbivores they prey on. It's always interesting how even we humans, masters of the planet, are brought low by the ravages of disease and parasites.
The Aresan Clan is published four times a week (Tue, Wed, Fri, Sun). You can see what's been written so far collected here. All posts will be posted under the Aresan Clan label. For summaries of the events so far, visit here. See my previous serial Vampire Wares collected here.
Sunday, July 31, 2011
Tuesday, July 26, 2011
Software Patents
There's an hour-long segment from "This American Life" called "When Patents Attack." It focuses on the rampant problem of patent trolling in the world of software patents and how far these patents have deviated from working "To promote the Progress of Science and useful Arts": instead they promote business models built entirely on buying up overly broad patents and then extracting money from actual innovators by forcing them to pay licensing fees. The program appropriately compares this business model to a protection racket, the old mob scheme in which the local mafia extorts fees from businesses in exchange for "protection," and if they don't pay up, the mob damages their property, burns down their building, or assaults them.
The scary thing is how often patents are issued despite substantial prior art that should negate them. A patent is not supposed to be issued when an innovation is either obvious to others in the field or when there is prior art, meaning someone already figured out how to do it. Nonetheless, redundant patents are issued all the time. One patent highlighted on the show, patent number 5,771,354, issued to a fellow named Chris Crawford in 1998, is a broad patent that covers things like cloud drives, online sales, and automatic software updates over the internet. It should never have been issued: there was considerable prior art, and it contains nothing non-obvious, since any programmer could easily figure these things out. In fact, it overlaps with a great many other issued patents. According to a software-assisted patent search, there were 5,503 active patents at the time this patent was issued that covered the same innovations. And yet the patent was issued, is now valuable, and is being used to extort money from companies that came up with these innovations on their own without even being aware of the patent's existence. And this is no anomaly: an estimated 30% of patents are issued for already-patented inventions.
At the very least, a reform of the system for issuing patents needs to be considered, such that patents that should be negated on grounds of prior art and obviousness aren't issued so frequently. Though it also tempts one to think that software patents are a bad idea to begin with, and that we should return to the old regime in which software was covered only by copyright.
Added: Kent Walker, Google's Senior Vice President & General Counsel, talked with TechCrunch yesterday about patents and the big Nortel patent auction Google was involved in. He sees the patent system as failing to encourage innovation, noting:
When you see a lot of [Venture Capital] money flowing into the acquisition and holding of patents, it’s a problem. These are not companies doing new things, they’re buying them. You see hundreds of millions and billions of dollars flowing in to exploit others...
An average patent examiner gets 15 to 20 hours per patent to see if it’s valid. It can take years to go back and correct mistakes.
Monday, July 25, 2011
Freighter Repo Man
The Telegraph has a story about a man who's found himself a very interesting line of work. He's involved in a broad range of issues dealing with maritime shipping and has quite a few different interests. The author of the article, Richard Grant, writes, "He works as a maritime lawyer, a ship surveyor, an insurance adjuster, a pilot and flight instructor, a stuntman for films and television, a blues drummer in New Orleans bars, and a scattershot business entrepreneur."
But the most interesting part is his work as a sort of repo man, someone who steals back freighters that have been seized.
Hardberger is a 62-year-old adventurer from Louisiana who specialises in stealing back ships that have been fraudulently seized in corrupt ports, mostly in Latin America and the Caribbean.
He describes himself as a 'vessel repossession specialist', a kind of maritime repo man who ghosts into tropical hellhole ports, outwits the guards and authorities, and ghosts out again with a 5,000- or 10,000-ton cargo ship, usually under cover of darkness and preferably during a heavy rainstorm.
For a measly $100,000, he'll swoop in and take back your seized freighter, whether that involves negotiations, bribes or just stealing it outright from under the noses of the harbor guards and coast guard. The article details many of the adventures he's had over the years and is well worth reading.
He's even got a bad-ass name, Max Hardberger (if you put both "max" and "hard" in your kid's name, it's doubtful you're expecting to raise a pushover).
Probably the least surprising part of the story is that "A Hollywood film about his escapades is planned, with The Good Pirate as a working title." That could potentially be very interesting if it does get made.
Saturday, July 23, 2011
End of the Dollar Coin Bonanza
Using dollar coins instead of dollar bills saves money. Though coins are more expensive to produce (costing 8¢, versus 3.8¢ per dollar bill), they last much longer and need to be replaced much less frequently, so the government could save money by producing only dollar coins. This has long been recognized, and in 1997 Congress passed a bill to start minting dollar coins again, creating the Sacagawea dollar, which was finally released and distributed beginning in 2000. Dollar coins, though, have never really caught on with the public, probably because they're much bulkier than paper money and because the US Bureau of Engraving and Printing continues to produce dollar bills alongside them. The government had already tried dollar coins in 1979 with the Susan B. Anthony coin, but it died quickly. And they continue to try, passing a 2005 law that created a new set of presidential dollar coins, which are being minted but have largely gone uncirculated.
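A quick back-of-the-envelope calculation shows why the longer lifespan dominates. The production costs below are the ones cited above; the circulation lifespans (roughly four years for a bill, thirty for a coin) are my assumptions for illustration, not figures from this post.

```python
# Cost of keeping one dollar in circulation, coins vs. bills.
# Production costs are the ones cited in the post; the lifespan figures
# are assumptions for illustration only.
COIN_COST = 0.08         # dollars to mint one dollar coin
BILL_COST = 0.038        # dollars to print one dollar bill
COIN_LIFE_YEARS = 30.0   # assumed circulation life of a coin
BILL_LIFE_YEARS = 4.0    # assumed circulation life of a bill

horizon = 30  # years
coin_total = COIN_COST * (horizon / COIN_LIFE_YEARS)  # one coin covers the whole horizon
bill_total = BILL_COST * (horizon / BILL_LIFE_YEARS)  # worn-out bills keep being reprinted

print(f"Keeping $1 circulating for {horizon} years costs about "
      f"${coin_total:.2f} as a coin vs. ${bill_total:.2f} as bills.")
```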
In an attempt to get the dollar coins more widely distributed, the US Mint instituted a program under which individuals could buy rolls of dollar coins at face value. In fact, the US Mint would even ship the coins to you for free and accepted many forms of payment, including credit cards.
People with credit cards that had rewards programs at some point realized there was an opportunity in this. If you purchased, say, 1,000 dollar coins for $1,000 with your credit card, you could then take this $1,000 in coins, deposit it in your bank account, and use the balance to pay off the credit card bill. On net, you've neither lost nor gained any money, but by paying with your credit card you've earned reward points, which you can accumulate. These reward points you could use for free flights, gifts, gift certificates, or whatever.
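Roughly sketched, one cycle of the loop looks like this; the reward rate, card fee, and shipping cost are illustrative assumptions, not figures from the post or from the Mint:

```python
# One cycle of the dollar-coin rewards loop. All rates and costs here are
# illustrative assumptions, not actual figures from the Mint or the post.
ORDER_SIZE = 1000.0     # dollars of coins bought per order
REWARD_RATE = 0.01      # assumed 1% back in reward points on the card
CARD_FEE_RATE = 0.02    # assumed interchange fee the Mint pays per charge
SHIPPING_COST = 15.00   # assumed cost for the Mint to ship the coins for free

buyer_rewards = ORDER_SIZE * REWARD_RATE                 # coins repay the card bill, points remain
mint_cost = ORDER_SIZE * CARD_FEE_RATE + SHIPPING_COST   # card fees plus free shipping

print(f"Buyer nets about ${buyer_rewards:.2f} in rewards per ${ORDER_SIZE:.0f} order;")
print(f"the Mint is out about ${mint_cost:.2f} on that same order.")
```

Whatever the true fee and shipping numbers were, the structure is the same: the buyer's gain is small per order but risk-free and repeatable, and every dollar of it (plus the Mint's overhead) comes out of the Mint's side of the ledger.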
The loser in all this was the US Mint, which had to pay the credit card fees and the shipping. Even worse, the coins weren't getting distributed, as was the point of this whole thing, since the banks would usually just end up shipping them back to the Federal Reserve after customers deposited them, contributing to an ever increasing cache of dollar coins in the Fed's vaults.
It couldn't last forever, though. The US Mint started to realize what was going on when the same people came back again and again to make large purchases of dollar coins. The Mint first started restricting the number of purchases that people could make, and then started getting in contact with customers to make sure the coins were for legitimate business purposes.
Such measures mitigated the problem, but it still continued. In fact, word was getting out (first on NPR's Planet Money on July 13, then on MSN Money on July 15), with the likely prospect of the problem only getting worse. Thus, beginning yesterday, the US Mint stopped accepting credit cards for buying dollar coins. You can still buy the dollar coins, but only with wire transfer, check or money order.
The lesson to be learned from this: if you find a really cool way of making money like this, don't tell anyone.
Inspiration vs Copyright Infringement
Michael Zhang asks the question "At What Point Does Inspiration Turn Into Copyright Infringement?" Namely, if an artist is inspired by another artist and uses similar elements or lifts pieces from another artist's work, how similar do the works have to be for it to count as copyright infringement? For example, if you love the character of Jay Gatsby and wanted to create a character based on him, would it be copyright infringement if your character is a newly rich man in love with a woman from old money? What if he had fallen in love with her when he was just a poor soldier? What if he became rich working for the mob? What if his name is Jay? Where exactly do you draw the line?
The example that Zhang points to is not a good example. Janine Gordon is suing photographer Ryan McGinley claiming he stole photography ideas from her and that his works represent blatant copying, but the similarities between their photographs seem superficial.
Mike Masnick, I think rightly, points out, "Honestly, it's difficult for me to even say that McGinley's are 'inspired' by Gordon's, let alone copies."
José Freire, a gallery owner who has worked with McGinley, responded in more detail, saying:
Among the artists named in reviews and essays about McGinley over the years one will find: Richard Avedon, Robert Mapplethorpe, Irving Penn, Man Ray, Alfred Steiglitz, Peter Hujar, Edward Weston, Catherine Opie, William Eggleston, Ansel Adams, and Dash Snow. Janine Gordon’s name has never once appeared as a comparison. These references, by numerous preeminent critics and curators, were not made to cast doubt on McGinley’s artistic process but rather to describe the status to which his work aspires....
Gordon’s claims for originality are extraordinary: she claims to have invented, among other things: visible grain and other errors in the image; the injection of the monochromatic into photography; the depiction of chaos; the use of smoke; the documentation of sub-cultures; and certain types of rudimentary composition (such as placing figures in the center of the page; or in a dynamic relationship to the edge of the image). She even appears to lay claim to “the kiss” as a “concept.”...
[Gordon] states that there are 150 instances of "copyright violation", however, these include numbers of images which are video stills taken by persons other than McGinley during extensive commercial shoots, pictures not even taken by McGinley, and images which resemble each other only if cropped, rotated, inverted, rendered in grayscale, or otherwise dramatically altered....
We are confident that Gordon's case has absolutely no merit whatsoever and that her litigation will ultimately do more damage to herself than to McGinley.
That point about the 150 instances is important: not all of the supposed examples of infringement are actually photos taken by McGinley, including the fourth image embedded above, of the three people lying on the bed. And, unsurprisingly, Gordon has a history of such lawsuits.
Nonetheless, the question still stands: how different do two works of art, or even elements within a work of art, have to be to count as inspiration, homage, or reference, and how similar before they become theft, infringement, or mere derivation, even leaving the legal question aside?
The whole problem is exacerbated by the fact that there is no objective way of measuring difference or originality. People have very strong opinions about apparent rip-offs, but those opinions are entirely subjective. It's not as though the judge in this case can pull out some super-secret originality yardstick from behind the bench, measure the respective differences, and declare with certainty whether they are or are not above the statutory limit. The subjectiveness of such questions is fine for art history, since critics are free to squabble over them for generations on end. But it's a big problem for copyright law, since ultimately someone, whether judge or jury, is going to be put in the position of deciding the question, and their quite arbitrary and personal decision is going to be elevated to legal fact by the force of law.
Though I agree with Freire that the case is without merit, it's entirely possible that a sympathetic judge will rule against him. It might be fun to discuss and debate about such questions of originality, but it's deadly serious when potentially hundreds of thousands, if not millions of dollars ride on the answer.
Friday, July 22, 2011
How to write a review
Robert Pinsky talks about what makes a good book review by looking at a famously malicious review John Keats' Endymion received when it was published in 1818. The review was written with gleeful acerbity by John Wilson Croker. The review fails, not just because it savages a true poetic masterpiece, but because it fails to do what a review is supposed to do. Pinsky explains that there are three things a review should include:
1. The review must tell what the book is about.
2. The review must tell what the book's author says about that thing the book is about.
3. The review must tell what the reviewer thinks about what the book's author says about that thing the book is about.
That seems like a good way to do a review to me. I remember when I used to read Peter Travers' movie reviews in Rolling Stone, he once explained that the purpose of a movie review is to get people to the movies they're going to like. A review is not merely a platform for a reviewer's personal taste; a reviewer is not some sort of anointed cultural gatekeeper; a reviewer is just someone there to help potential customers.
Monday, July 18, 2011
Borders Closing
Borders is closing down. They've been in bankruptcy proceedings since February, and, after unsuccessful attempts to sell the company, they've decided to liquidate their assets and close all their stores. The key lesson is that the world changes, and even a once-dominant company can be brought down by shifts in the market. Not only have people been getting more of their print books online, mostly via Amazon, but they've also been buying more ebooks and fewer print books, to the point that ebook sales now surpass print book sales (as I mentioned earlier).
As with all such changes, some things will be lost even as big advantages are gained, and this not infrequently leads to a lot of (usually unnecessary) worrying about what's being lost. Michael J. De La Merced and Julie Bosman write:
The news exposed one of publishers’ deepest fears: that bookstores will go the way of the record store, leaving potential customers without the experience of stumbling upon a book and making an impulse purchase. In the most grim scenario, publishers have worried that without a clear place to browse for books, consumers could turn to one of the many other forms of entertainment available and leave books behind.
The worry about losing the serendipity of stumbling upon a book you haven't heard of while browsing the stacks seems rather silly to me, since Amazon has expended great effort to encourage exactly this kind of serendipity. Amazon has its "Customers Who Bought This Item Also Bought," "Bestseller Rank" and "Customers Also Bought Items by" lists, as well as its user-generated "Listmania!" and "So You'd Like to..." lists, all of which are very useful for finding books you haven't heard of. All it takes is a little browsing through these links to discover totally new titles of interest.
The lack of a "clear place to browse for books" seems equally silly. Just because people can't walk through a bookstore doesn't mean they'll stop reading books. In fact, one of the reasons Borders is failing is that people have decided they prefer browsing an online bookstore from the convenience of their computer. The publishers make it seem as though brick-and-mortar bookstores have been taken away from consumers against their will, whereas the reality is that it's the consumers who have largely taken themselves out of the bookstores. People browse online. They browse on Amazon. They browse ebook stores. They get recommendations from friends. They read book reviews online. People browse differently and they read differently. This doesn't mean there's no demand for books anymore, just less demand for print books.
So, the world continues to change. Things that were once one way aren't so anymore, and I can't deny that some things will be missed. I have fond memories of browsing through Borders stores and reading stuff I pulled off the shelf. But I, like most other consumers, like getting my books online better.
Sunday, July 17, 2011
Tim Harford TED Talk
Tim Harford gives a TED talk discussing trial and error and the God complex. He defines the God complex as the belief that, no matter how complicated the situation or problem a person faces, that person is infallibly right. Harford says the antidote to this problem is trial and error.
The real advantage of trial and error is that it allows us to solve problems that surpass our ability to understand them. I remember reading Ray Kurzweil a long time ago, and he argued at one point, basically, that evolution is a sort of simplistic intelligence. Because it works through trial and error, it can produce living things whose intelligence far surpasses its own (like us humans, for example), though, because it is rather simple and crude, it moves slowly, very slowly. Kurzweil was, of course, talking about the singularity, making the point that much more sophisticated intelligences, like humans and, some day, super-intelligent machines, can accelerate this kind of evolution. But the relevant insight here is that trial and error is a means of accomplishing things that only something much smarter than us could comprehend.
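A tiny illustration of that point: the random-mutation search below homes in on a target string by pure trial and error, with no model of the problem beyond "keep whatever scores no worse." The target string and the mutation scheme are arbitrary choices for the sketch, not anything from Harford's talk or Kurzweil's books.

```python
import random
import string

# Trial and error as blind search: mutate one character at random and keep
# the result whenever it matches the target at least as well as before.
TARGET = "trial and error beats the god complex"
ALPHABET = string.ascii_lowercase + " "

def score(candidate: str) -> int:
    """Count the positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

guess = "".join(random.choice(ALPHABET) for _ in TARGET)
attempts = 0
while guess != TARGET:
    attempts += 1
    pos = random.randrange(len(TARGET))
    mutant = guess[:pos] + random.choice(ALPHABET) + guess[pos + 1:]
    if score(mutant) >= score(guess):  # keep the trial only if it's no worse
        guess = mutant

print(f"reached the target after {attempts} random mutations")
```

The searcher never understands why a given change helps; it just keeps what works, which is the whole trick.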
Harford calls trial and error an antidote to the God complex simply because it so elegantly shows us that we don't understand things we think we understand. But he ends with the point that, though everyone knows trial and error is great, it's criminally underutilized because the God complex is so seductive. There are so many people who are so confident that they are right, and since people put much more faith in those who are more confident, such people have influence, all too much influence, I'm afraid.
Tim Harford TED Talk
Friday, July 15, 2011
How the internet affects our memory
New research shows that we tend to remember things more poorly when we think we can look them up later. As the New York Times describes it:
Dr. Sparrow and her collaborators, Daniel M. Wegner of Harvard and Jenny Liu of the University of Wisconsin, Madison, staged four different memory experiments. In one, participants typed 40 bits of trivia — for example, “an ostrich’s eye is bigger than its brain” — into a computer. Half of the subjects believed the information would be saved in the computer; the other half believed the items they typed would be erased.
The subjects were significantly more likely to remember information if they thought they would not be able to find it later. “Participants did not make the effort to remember when they thought they could later look up the trivia statement they had read,” the authors write.
In other words, we don't put as much effort into remembering things we don't think we have to remember. Does this show that our memories are poorer because of the internet? No. In fact, we're now barraged with so much more information and trivia than in the past that learning what is worth actually retaining in your noggin and what you can just look up later if you ever need it is a really important skill. The researchers also note that people tend to remember how to find the info, or where it is stored, better than the info itself.
Ronald Bailey at Reason magazine makes the appropriate connection to Plato's Phaedrus (274e-275b), where Socrates criticized writing's effect on memory, saying:
this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.
But Socrates has turned out to be wrong. Writing doesn't diminish our memory; it just changes the way we remember things. Just as writing was in Plato's day, the internet is becoming a sort of backup drive for our memory, a place to go to access the stuff we don't have room for on our main drive. It supplements our memory. Wegner calls this type of thing "transactive memory," a place "where information is stored collectively outside the brain."
And there's nothing new here. We've always used other sources to supplement our memory, whether by asking friends, looking things up in books or checking our notes. The difference is that the internet is now filling that role almost exclusively.
How the internet affects our memory
Wednesday, July 13, 2011
The Oxford Shakespeare Theory
I was just noticing that Roland Emmerich is working on a movie premised on the idea that Shakespeare's plays were actually written by Edward de Vere, the Earl of Oxford (due out 2012; a trailer is already up). My first reaction was, "How is Roland Emmerich going to incorporate the destruction of famous landmarks into a movie set in Elizabethan England?" But my second reaction was, "So, they're making a movie out of the ol' Oxford-Shakespeare theory. Interesting."
The idea that the William Shakespeare from Stratford-upon-Avon was not the true author of the plays attributed to him is an old theory. First, in the nineteenth century, it was proposed that the plays were written by Francis Bacon. This theory runs into the problem that Bacon's style is fairly distinct from Shakespeare's and that Bacon is not otherwise known to have written any plays.
The theory that Christopher Marlowe wrote Shakespeare's plays was proposed next. This had more plausibility, since Marlowe did write plays, was very good at writing them, and certainly had more stylistic similarity to Shakespeare. But it ran into the problem that Marlowe was dead, having died in 1593, about twenty years before Shakespeare retired in 1613. No problem: these people claimed that Marlowe faked his own death. The trouble is that his death, a stabbing in a bar fight, doesn't exactly fit the profile of a faked death. Marlowe was a very public figure, a well-known playwright, who was killed in a very public place, a bar, and his death was followed by a post-mortem and an inquest. If you want to fake your own death, you're much better off doing it in a way that leaves very few witnesses and little evidence, like dying in a fire or an explosion or a plane crash or drowning at sea. Heck, even in this day and age you could probably get away with faking your own death as a drowning at sea (note to future self: do not attempt). Even granting that faking your death in 1593, with its rather primitive forensic science, would be a lot easier than it is now, it's still hard to imagine how Marlowe could have gotten away with it.
The currently most popular theory of alternative Shakespearean authorship is the Oxford theory, attributing the plays to Edward de Vere. This is more plausible, since de Vere was a celebrated poet and playwright in his day, was a patron of the theater, and survived until 1604, which means we only have to assume that some of de Vere's works were performed posthumously, which is possible.
On the other hand, we should note that most Shakespeare scholars are Stratfordians, which is to say they believe the plays of Shakespeare were written by the William Shakespeare from Stratford-upon-Avon, not by Edward de Vere or Bacon or Marlowe or anyone else. They believe this for a number of reasons based on very good evidence. For one, there is the simple and obvious point: the plays were, in their day, widely attributed to Shakespeare. The facts that everyone said the plays were written by Shakespeare and that all of the (admittedly unauthorized) publications of the plays that name an author attribute them to Shakespeare are pretty strong evidence. Admittedly, it's possible there was some sort of clandestine intrigue behind the scenes to obfuscate authorship, but in the absence of evidence of such intrigue, it's best not to assume that everyone was being duped. Additionally, we have good evidence that William Shakespeare of Stratford was a real person, which makes one wonder why de Vere (or one of the other supposed authors) would attribute his plays to a real person, a minor actor in an acting company, instead of just making up a pseudonym like "Eddy Veretti" or "Redox Fordbridge" or something.
Also, most scholars reject the argument, which lies behind all the alternative authorship theories, that since education wasn't widespread then and Shakespeare wasn't from the gentry who could afford a high-quality education and access to books, Shakespeare simply wasn't well-educated or cultured enough to have written such plays. The truth is that Shakespeare was the son of a prominent merchant, had access to a rigorous grammar school education, and certainly became well connected with the English aristocracy as he grew more prominent. Not to mention that most of Shakespeare's plays are adaptations, not original works, meaning a lot of the details Shakespeare was supposedly unable to know about come directly from the works he adapted. Additionally, we have only a small sliver of the plays written during Shakespeare's time, meaning that literary allusions we now assume only a well-educated author could make may in fact have been quite commonplace in the theater community at the time. In fact, some Cambridge students, in 1601, mocked the university-trained playwrights for overusing classical allusion, and noted how Shakespeare, not university-educated, was fortunately clear of that vice (quoted here).
When I was a young English major pursuing my undergraduate education, I too toyed with the idea of alternative Shakespeare authorship, first with the Marlowe theory and later with the Oxford theory. But I ultimately dropped them because there were a couple of problems with the theories I couldn't reconcile. For one, Shakespeare became extremely wealthy during his career. He was an actor, but reportedly a relatively minor and not particularly celebrated one. It just didn't seem plausible that a minor actor could accumulate enough wealth to become, for example, a part owner of the Globe theater.
Even more implausible to me was the idea that a prominent and dignified courtier could write a play as bloody as Titus Andronicus. In the play, Titus' daughter is raped and has her hands and tongue cut off, and Tamora's sons are killed, baked into a pie, and fed to her. Trying to imagine a stately Elizabethan aristocrat writing such stories is really difficult. (There are authorship questions surrounding Titus Andronicus, but they don't really change things, since most scholars believe Shakespeare wrote all of it or co-authored it and was still the author of these famously bloody scenes.)
That being said, though I think the Oxford-Shakespeare theory is wrong, it is still an interesting and tantalizing theory. So making a movie based on it may not be a bad idea, and it could turn out to be a good movie. It's just that Hollywood has a somewhat unfaithful relationship with historical accuracy, and this movie will probably be no exception.
The Oxford Shakespeare Theory
Tuesday, July 12, 2011
Unrealistic fantasies
A while ago I posted about how romance novels seem to be for women what porn is for men. So it's interesting to see, at around the same time, a study by some authors claiming that romance novels lead to sexual health problems and a rant by Naomi Wolf claiming that porn leads men to mental health problems, such as porn addiction and a propensity for extreme sex.
Both articles present a veneer of scientific credibility, but neither carries much weight. Mind Hacks takes apart Wolf's argument pretty succinctly, showing how she misuses some facts and basically doesn't understand the neurochemistry she presents in the article. The article about romance novels, similarly, doesn't seem to present any evidence for its conclusion. It basically looks at romance novels, sees their content and pretty much says, "if women take these stories as realistic expectations for what to expect from real-world romances, that'll lead to problems," which is tantamount to saying, "If young kids read Harry Potter and start to expect that they're very soon going to discover they're wizards capable of magic, that'll lead to problems." Nowhere does the article actually show that women pick up romance novels thinking they're going to be educational and informative.
So maybe porn and romance novels aren't that bad after all. Oh well. I guess we can go back to letting people occasionally and temporarily escape into their unrealistic fantasies.
Unrealistic fantasies
Sunday, July 10, 2011
Everything is a remix
I also wanted to mention that the third part of Kirby Ferguson's series "Everything is a Remix" is up and is totally worth watching. The series overall is quite good, and this installment is no exception. The basic premise is that all innovation and invention in art, science and technology is really just remixing. Ferguson defines remixing as "to combine or edit existing materials to produce something new," and goes on to explain how music, film, and (in this installment) invention are just remixes.
In short, what makes new innovations new is really just that they take old things and combine them in new ways.
I remember reading a critique that Jacques Derrida made of Claude Levi-Strauss. Levi-Strauss had said that his methodology as an anthropologist was bricolage, namely taking existing, available tools and using them for new purposes. The opposite would be the engineer, who constructs the proper tool for the proper purpose. Derrida's critique was that all discourse is bricolage. He made this point by noting that a new thinker simply couldn't reconstruct "language, syntax, and lexicon" from scratch. In short, all intellectual discourse, which is to say all intellectual history, is bricolage, because its thinkers, the philosophers, scientists, historians, economists and so on, are taking existing language and ideas and trying to use them to describe new ideas that the language and ideas weren't specifically designed for.
Or, to put Derrida's point in the terms that Ferguson co-opted from music to describe the history of innovation: all intellectual history, including all philosophy, is a remix.
Everything is a remix
Germinating good ideas
Been reading Steven Johnson's Where Good Ideas Come From. The basic premise of the book is that good ideas really involve putting together and integrating existing ideas. Thus, the way to promote good ideas is to foster openness and connectivity, so that people can take other people's good ideas (or nascent ideas) and pull them together to create new ones.
For one, he talks about how many good ideas come from talking and sharing with other people. We have this perception that good ideas come from lone geniuses dreaming up brilliant insights in profound "Eureka!" moments while meditating alone. The truth is that most ideas come from people talking and collaborating. He notes that in research labs, most of the big breakthroughs actually come about through staff meetings and presentations, when people share and critique their discoveries.
He also notes that sudden flashes of insight are not the norm. We may perceive ideas as coming to us all of a sudden, like a bolt of lightning, but the truth is that good ideas germinate slowly. What we perceive as Eureka moments are just one salient step along a long process of careful consideration. Since good ideas take a lot of time and germination, it's really useful to write down our thoughts. A good idea doesn't leap fully formed from your head; it needs to be germinated. And to germinate a good idea you need to remember it, so that you can return to it again and again to add to it and refine it. This is the reason it was extremely common for great thinkers from the 17th century onward to take voluminous notes, filling notebooks with ideas, quotes, scraps of thought and experiences. These notebooks facilitated the germination, as these thinkers would return to them and rethink the information that seemed most worth noticing.
As I work on my dissertation, which has inevitably taken me deep into Nietzsche's voluminous notebooks, such insights certainly ring true with my experience. Not only did he take voluminous notes, but in those notebooks are the initial insights that would ultimately lead to his more famous ideas. The Revaluation of All Values, which I am studying in particular, began as a crude idea when Nietzsche was just a young professor fresh out of college, but it didn't become the fully fledged idea you read about in philosophy textbooks until well over ten years later.
What is also interesting about Johnson's insights is that the notebook, at least for some people, has been replaced by something arguably better: the blog. Though people use blogs for different things, for many it's like an open notebook, where you can link to stories and ideas you like, engage in debates and record your thoughts and experiences, just as people would do with their notebooks in the past. Except it's better than the notebook because it has the openness and shareability that the standard closed notebook lacks. Thus, it can germinate ideas for the author of the blog and also share those germinal ideas with others, potentially benefitting them too.
He talks for one about how many good ideas come from talking with and sharing with other people. We have this perception that good ideas come from lone geniuses dreaming up brilliant pieces of insight in profound "Eureka!" moments while meditating alone. The truth is that most ideas come from people talking and collaborating. He notes that in research labs, most of the big breakthroughs actually come about through staff meetings and presentations when people share and critique their discoveries.
He also notes that sudden flashes of insight are not the norm. We may perceive ideas as coming to us of a sudden like a bolt of lightning, but the truth is that good ideas germinate slowly. What we perceive as Eureka moments are just one salient step along a long process of careful consideration. Since good ideas take a lot of time and germination, it's really useful to write down our thoughts. A good idea doesn't leap fully formed from your head, but needs to be germinated. And to germinate a good idea you need to remember it, so that you can return to it and re-return to it so you can add to it and refine it. This is the reason it was extremely common for great thinkers from the 17th century forward to take voluminous notes, filling notebooks with ideas, quotes, scrap of thought and experiences. These notebooks would facilitate the germination as these thinkers would return to their notebooks and rethink the information that seemed most worth notice.
As I'm working on my dissertation, which has inevitably taken me deep into Nietzsche's voluminous notebooks, such insights certainly ring true with my experience. Not only did he take voluminous notes, but in those notebooks are the initial insights that would ultimately lead to his more famous ideas. The Revaluation of All Values, which I am in particular studying, began as a crude idea when Nietzsche was just a young professor fresh out of college, but didn't really become the fully fledged idea that you read about in philosophy textbooks until well over ten years later.
What is also interesting about Johnson's insights is that the notebook, at least for some people, has been replaced by something arguably better: the blog. Though people do use blogs for different things, for many people it's like an open notebook, where you can link to stories and ideas you like, engage in debates and record your thoughts and experiences, just like people would do with their notebooks in the past. Except it's better than the notebook because it has the quality of openness and sharing that the standard closed notebook lacks. Thus, it can be used to germinate ideas for the author of the blog, as well as share those germinal ideas with other, potentially benefitting them.
Been reading Steven Johnson's Where Good Ideas Come From. The basic premise of the book is that good ideas really involve putting together and integrating existing ideas. Thus, the way to promote good idea is to permit openness and connectivity, so that people can take other people's good ideas (or nascent ideas) and pull them together to create new good ideas.
He talks for one about how many good ideas come from talking with and sharing with other people. We have this perception that good ideas come from lone geniuses dreaming up brilliant pieces of insight in profound "Eureka!" moments while meditating alone. The truth is that most ideas come from people talking and collaborating. He notes that in research labs, most of the big breakthroughs actually come about through staff meetings and presentations when people share and critique their discoveries.
He also notes that sudden flashes of insight are not the norm. We may perceive ideas as coming to us of a sudden like a bolt of lightning, but the truth is that good ideas germinate slowly. What we perceive as Eureka moments are just one salient step along a long process of careful consideration. Since good ideas take a lot of time and germination, it's really useful to write down our thoughts. A good idea doesn't leap fully formed from your head, but needs to be germinated. And to germinate a good idea you need to remember it, so that you can return to it and re-return to it so you can add to it and refine it. This is the reason it was extremely common for great thinkers from the 17th century forward to take voluminous notes, filling notebooks with ideas, quotes, scrap of thought and experiences. These notebooks would facilitate the germination as these thinkers would return to their notebooks and rethink the information that seemed most worth notice.
As I'm working on my dissertation, which has inevitably taken me deep into Nietzsche's voluminous notebooks, such insights certainly ring true with my experience. Not only did he take voluminous notes, but in those notebooks are the initial insights that would ultimately lead to his more famous ideas. The Revaluation of All Values, which I am in particular studying, began as a crude idea when Nietzsche was just a young professor fresh out of college, but didn't really become the fully fledged idea that you read about in philosophy textbooks until well over ten years later.
What is also interesting about Johnson's insights is that the notebook, at least for some people, has been replaced by something arguably better: the blog. Though people do use blogs for different things, for many people it's like an open notebook, where you can link to stories and ideas you like, engage in debates and record your thoughts and experiences, just like people would do with their notebooks in the past. Except it's better than the notebook because it has the quality of openness and sharing that the standard closed notebook lacks. Thus, it can be used to germinate ideas for the author of the blog, as well as share those germinal ideas with other, potentially benefitting them.
He talks for one about how many good ideas come from talking with and sharing with other people. We have this perception that good ideas come from lone geniuses dreaming up brilliant pieces of insight in profound "Eureka!" moments while meditating alone. The truth is that most ideas come from people talking and collaborating. He notes that in research labs, most of the big breakthroughs actually come about through staff meetings and presentations when people share and critique their discoveries.
He also notes that sudden flashes of insight are not the norm. We may perceive ideas as coming to us of a sudden like a bolt of lightning, but the truth is that good ideas germinate slowly. What we perceive as Eureka moments are just one salient step along a long process of careful consideration. Since good ideas take a lot of time and germination, it's really useful to write down our thoughts. A good idea doesn't leap fully formed from your head, but needs to be germinated. And to germinate a good idea you need to remember it, so that you can return to it and re-return to it so you can add to it and refine it. This is the reason it was extremely common for great thinkers from the 17th century forward to take voluminous notes, filling notebooks with ideas, quotes, scrap of thought and experiences. These notebooks would facilitate the germination as these thinkers would return to their notebooks and rethink the information that seemed most worth notice.
As I work on my dissertation, which has inevitably taken me deep into Nietzsche's voluminous notebooks, these insights certainly ring true with my experience. Not only did he take voluminous notes, but those notebooks contain the initial insights that would ultimately lead to his more famous ideas. The Revaluation of All Values, which I am studying in particular, began as a crude idea when Nietzsche was just a young professor fresh out of college, and didn't become the fully fledged idea you read about in philosophy textbooks until well over ten years later.
What is also interesting, in light of Johnson's insights, is that the notebook, at least for some people, has been replaced by something arguably better: the blog. Though people use blogs for different things, for many it functions like an open notebook, where you can link to stories and ideas you like, engage in debates, and record your thoughts and experiences, just as people did with their notebooks in the past. Except it's better than the notebook, because it has a quality of openness and sharing that the standard closed notebook lacks. Thus it can be used to germinate ideas for the author of the blog, while also sharing those germinal ideas with others, potentially benefiting them.
Germinating good ideas
Friday, July 8, 2011
Suitable
In Oak Park, MI, there is a law that says front yards must have "suitable, live, plant material." A woman decided to plant a vegetable garden in her front yard, but a city planner, Kevin Rulkowski, subsequently decided that it violated the law. Clearly the problem wasn't the "live, plant material" part, since her vegetable garden is unambiguously that. The problem was with the ever-vague and subjective "suitable."
Of course, what counts as "suitable" is entirely a matter of personal opinion, but Rulkowski decided to justify his decision by claiming:
"If you look at the definition of what suitable is in Webster's dictionary, it will say common. So, if you look around and you look in any other community, what's common to a front yard is a nice, grass yard with beautiful trees and bushes and flowers."
Since I was pretty sure that there is no connection between "suitable" and "common" in the English language, I went to my dictionary, and sure enough it uses words like "right" and "appropriate" but not "common." Just to check, I went to Merriam-Webster's online dictionary for a definition of "suitable," and again found words like "fitting" and "proper," but not a whiff of "common" there either.
Unsurprisingly, I'm not the first person to go to Merriam-Webster to confirm Rulkowski's mistake. The entry for "suitable" has actually attracted a number of comments, 19 at the present count. My favorite is, "Suggestion for new antonym: Rulkowski."
An even better one can be found in the comments on this story at The Agitator, where commenter "Jeff" says: "Walking through my little city, I occasionally see gardening boxes in a front yard. It always makes me smile, as it seems like a good use of space. Dare I say suitable?"
I would speculate that perhaps Rulkowski's dictionary is broken, though I suspect the problem is user error.
Suitable