Friday, July 15, 2016

The upcoming Olympics are to be held in Rio, and the news is worrisome.  Crime, disease, and marginally functioning security and public infrastructure all raise questions about whether the games will be a success or a failure.  Thinking back, many cities have had "successful" Olympics but were left with big financial problems - it took Montreal 30 years to pay off the debt it incurred hosting the 1976 Olympics.

Why not have a permanent home for the Olympics, say in Greece, near Olympia?  Other countries could still act as hosts for specific games (and receive significant revenue thereby), but the permanent facility itself would be improved each time.  Ongoing maintenance would be much cheaper than building from scratch every four years.  Our current alternative leaves former Olympic sites without any clear purpose.  See, for instance, "What Abandoned Olympic Venues from Around the World Look Like Today".

Greece might not be enough; the Winter Olympics, for instance, might also want a permanent home in a colder climate.  There might be other specific sports that need their own unique sites, but the principle of establishing a long-term venue still seems a sound one.

Tuesday, November 26, 2013

A retrospective bit on Obamacare

I remember a conversation I had with a Democrat years ago, when Obamacare first burst onto the national stage. I had expressed skepticism on a number of points, to which he replied (as best as I can recall):

"Bryan, you just don't get it. This will result in a permanent Democratic majority - and the fun part is that your tax dollars will be paying for it. In a little while the benefits will be so central to people's lives that [it] will be impossible to remove. Republicans will always be the ones wanting to limit benefits, so we'll always be 'the good guys'. Not only will people vote for us because they know we'll not cut what they need, but we'll be able to paint you as 'meanspirited' from here on out."

It was a very depressing thought, but it looks like it's not quite working out that way... 

Thursday, April 18, 2013

A term popped into my head the other day: "Theoretical Genealogy".  By it I mean the mathematics of populations - genetic diffusion, the probability of surname survival, how population growth skews the pool of descendants, etc.  This post is about just one topic, suggested by some recent research: diffusion.

Diffusion

While using Family Tree Maker’s search capabilities, I found I could add quite a lot of ancestors; there is no sound way to be sure they’re correct, and it’s hard to get an intuition about the reliability of the data, but it is a relatively rapid process.  As I chased lines back into the Middle Ages the doubts increased – but if nothing else the information may be used in the future as a starting point for, say, using DNA to verify links.  That is, it’s better than nothing, though perhaps not as reliable as I would like.

I noticed one phenomenon early on – many of my lines run through England (1600’s, 1700’s), and occasionally one of these would lead to some minor nobleman.  That person would in turn lead back to more important nobility, and finally to some recognizable king (e.g., William the Conqueror).  After this sort of thing happened a few times I began to be concerned that it really represented the wishful thinking of someone, or some genealogist, along the line.  Am I really descended from Billy (above), as well as Charlemagne (several ways), along with various Viking and early Irish kings?  I didn't think so, because it seemed so surprising, and I began that slightly unnerving process of figuring out where in the line of descent fiction had crept in.

Now I’m not so sure.  The records of the minor nobleman and his ancestors seem fairly sound, I expect because property, wealth, and some level of authority changed hands with each generation.  The main weak point was the link between my known ancestry and that nobleman – that is a big jump.  Often it seemed to be the third daughter or so of the nobleman marrying my ancestor, who might have been a well-off farmer.  That sort of thing doesn’t seem unreasonable.  In the cases when this happened in the 1600's-1700's, sometimes coincident with travel to America, it was documented.  It might not have been documented had it been just one more element of small-town life in England.

Moving backwards in time from the minor nobleman, the links seem fairly solid, for the same reason as above: it mattered at the time in substantial ways.  I’m sure there is questionable data in there – perhaps the occasional infidelity not identified as such ("Yes, he's your son, ignore the red hair") – but otherwise the genealogical data seems as sound as one might expect.

However, looking at the totality of the lines I’ve traced, I find I’m related to all sorts of kings and notables, in Ireland, France, Scandinavia, etc.  Is this reasonable, or is it the cumulative effect of past genealogists fudging the data a bit to claim famous ancestors?

To address this I turned the problem around, and imagined one of these notables (king, count, whatever) – they often had some semblance of wealth and a number of children.  The oldest few, particularly the males, would likely have married into other noble families, and it is from those lines that the people holding the noble titles today are descended.  The other kids did as well as they could, but often married significantly lesser nobility.  When this process repeated itself it eventually would mean some descendants were marrying commoners - perhaps on the well-off side, but non-nobility nonetheless.  This is not at all unreasonable - and if you think about it, the descendants of the original king/count/whatever would end up spreading out throughout the population.  Working backwards, then, as genealogists do, it isn't implausible that one might be caught up in that spread and then subsequently be led to the original notable as an ancestor.

What this really is, is just genetic diffusion in the population, and when looked at from that perspective it seems unsurprising.  There have been recent articles about Genghis Khan’s descendants – apparently about 10% of males in the area of his former empire are related to him (or his family; one might imagine a brother, for instance).  As future generations arise the mixing will of course go further, until nearly everyone will be able to claim him as an ancestor.  Of course the number of generations back to Genghis is so large that the percentage of genetic data from him is quite dilute, and will become more so, but this is just the other side of the genetic diffusion coin.

Of course just as many people might trace their lineage back to Charlemagne, or Genghis, there are also some likely unknowns who cast similar genetic shadows over the future - some unknown peasant father and/or mother, who had numerous children, healthy, good looking, both sons and daughters, whose descendants spread out just as widely.  We just don't know their names.  The nobility, even minor nobility, shows up in church and civil records; the farmer doesn't.  Perhaps DNA analysis might at some point in the future reveal his existence, even if tentatively. 

(It is one of the striking things about doing this kind of research - how many people left so little trace other than through their children.  Even their names are missing.  I wish I had at least a page, or even a paragraph, of information about each - I'm sure their challenges weren't fundamentally so different from ours.)

So - is there any way to quantify any of this, even crudely?

My background is European; the population of Europe in 1700 was about 50 million.  That is roughly the number of ancestors you have 25 generations back (assuming no repeating ancestors).  At 20 years per generation, that’s 500 years.  By 'no repeating ancestors' I mean no ancestor appearing twice or more - saying it that way makes it seem easy.  Turning it around - it means that, say, at the 15th generation, two people who marry must have absolutely no ancestry in common.  I think this would be rather hard in a society where long-distance travel was rare.  The 500 year figure might well be half of that - 250 years - which, while long, is perhaps close to accessible oral history, particularly in fairly stagnant populations.  In either case, 300+ years on, a crude analysis suggests that pretty much anyone in Europe then might well be one of my (or your) ancestors.
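
Just to make the arithmetic concrete, here is a quick check in Python - a sketch using only the rough figures above (50 million people, 20 years per generation):

```python
# A quick sanity check of the numbers above: with no repeated ancestors
# you have 2**n of them n generations back.  Using the rough figures in
# the text (Europe ~50 million, ~20 years per generation), find where
# the nominal ancestor count first passes that size.

def generations_to_exceed(population: int, years_per_generation: int = 20):
    """Smallest n with 2**n >= population, plus the corresponding span in years."""
    n = 0
    while 2 ** n < population:
        n += 1
    return n, n * years_per_generation

gens, years = generations_to_exceed(50_000_000)
print(f"2**{gens} = {2 ** gens:,} ancestors, roughly {years} years back")
# prints: 2**26 = 67,108,864 ancestors, roughly 520 years back
```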

The process of genetic diffusion is an intriguing one.  One might imagine as a simplified ideal a uniform population in which any person is likely to marry any other person in the population with roughly equal probability (excluding close relatives).  In that case the mixing will be maximized, that is, it will happen as rapidly as possible.  Of course such populations may not really exist – social stratification, for instance, will lead to several (possibly overlapping) subpopulations that intermix that way, but not so much with each other.  Or there may be geographical separations that cause small ‘pockets’ of population to intermix internally but not externally.  I would think population genetics might be able to detect traces of such historical isolation from the genes and histories of people today.  It would take the right sampling to be able to draw solid conclusions, but it might be done.  While it seems an abstract notion, I think it might be practical, at least for recent cases.
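
As a toy illustration of that ideal mixing, here is a small simulation sketch: a closed population of fixed size in which each child simply draws two parents at random from the previous generation, ignoring sex ratios, monogamy, geography, and everything else that matters in real life.  The population size and generation count are arbitrary; the point is only how quickly a single founder's descendants spread.

```python
import random

def descendant_fraction(pop_size=10_000, generations=20, seed=1):
    """Fraction of each generation descended from a single founder."""
    rng = random.Random(seed)
    is_descendant = [False] * pop_size
    is_descendant[0] = True                      # generation 0: one founder
    fractions = []
    for _ in range(generations):
        next_gen = []
        for _ in range(pop_size):
            mother = rng.randrange(pop_size)     # parents drawn uniformly at random
            father = rng.randrange(pop_size)
            next_gen.append(is_descendant[mother] or is_descendant[father])
        is_descendant = next_gen
        fractions.append(sum(is_descendant) / pop_size)
    return fractions

for gen, frac in enumerate(descendant_fraction(), start=1):
    print(f"generation {gen:2d}: {frac:6.1%} descend from the founder")
```

The fraction roughly doubles each generation until it saturates near the whole population - the "spreading out" described above.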

For example, a number of my ancestors were among the early Dutch settlers of what is now New York City.  When the city became British in 1664, the Brits considered the Dutch second-class citizens, and intermarrying with them was rare; consequently the Dutch who remained by necessity intermarried among themselves.  The end result is that, very loosely speaking, if you have a Dutch ancestor in that group it’s not unlikely you’re related to many of the other Dutch families who were present.  This phenomenon should be detectable by examining the DNA of descendants of that time – there would be areas of commonality due to that (socially) isolated population mixing as it did.  It would be more interesting to deduce such isolated populations entirely from the DNA as a way to augment history.  Doing it in the more distant past might depend on tracking mutations, and these might not happen rapidly enough to spread through a location such as 17th century New York City.  It might be different for, say, 6th century Naples.

The resulting model might be one of relatively static pools of population, connected by some punctuated diffusion.  This might not match historical reality exactly (say, an individual might have married into an immigrant family in his town, then his descendants might have moved back to the source of the immigration), but it might be a useful model nevertheless.  It seems ripe for mathematical modeling.
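
Extending the toy simulation above to this "pools with punctuated diffusion" picture is straightforward.  Here is a sketch with two pools and a small, made-up chance each generation that a parent is drawn from the other pool:

```python
import random

def two_pool_spread(pool_size=5_000, generations=30, migration=0.01, seed=2):
    """Same random-mating model as before, but two pools with occasional cross-pool parents."""
    rng = random.Random(seed)
    pools = [[False] * pool_size for _ in range(2)]
    pools[0][0] = True                            # the founder lives in pool A
    history = []
    for _ in range(generations):
        new_pools = []
        for p in (0, 1):
            children = []
            for _ in range(pool_size):
                parent_flags = []
                for _parent in range(2):
                    # With probability `migration`, this parent comes from the other pool.
                    src = 1 - p if rng.random() < migration else p
                    parent_flags.append(pools[src][rng.randrange(pool_size)])
                children.append(any(parent_flags))
            new_pools.append(children)
        pools = new_pools
        history.append(tuple(sum(pool) / pool_size for pool in pools))
    return history

for gen, (a, b) in enumerate(two_pool_spread(), start=1):
    print(f"generation {gen:2d}: pool A {a:6.1%}   pool B {b:6.1%}")
```

The second pool lags the first by a number of generations that depends on the migration rate - the kind of signature one might hope to read back out of DNA.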
 

 

Saturday, October 6, 2012

The God Abandons Antony (C. P. Cavafy)

When suddenly, at midnight, you hear
an invisible procession going by
with exquisite music, voices,
don't mourn your luck that's failing now,
work gone wrong, your plans
all proving deceptive - don't mourn then uselessly.
As one long prepared, and graced with courage,
say goodbye to her, the Alexandria that is leaving.
Above all, don't fool yourself, don't say
it was a dream, your ears deceived you:
don't degrade yourself with empty hopes like these.
As one long prepared, and graced with courage,
as is right for you who proved worthy of this kind of city,
go firmly to the window
and listen with deep emotion, but not
with the whining, the pleas of a coward;
listen - your final delectation - to the voices,
to the exquisite music of that strange procession,
and say goodbye to her, to the Alexandria you are losing.

(Edmund Keeley, Philip Sherrard, translators)

Thursday, July 14, 2011

Thoughts on the Multi-Universe Interpretation of Quantum Mechanics

Just looking around the 'net I see much written about quantum mechanics that isn't, well, sound; I hope I'm not contributing to that genre, and fear that I might be.  I should start by saying I have studied it a bit, and have a degree in Physics from MIT, but I'm certainly not an expert, and many of its intricacies I'm sure lie buried by the detritus of years of other thoughts.  I'd like to lay out the outline of an idea, as much as anything to get it down so it won't be forgotten.

Very briefly and somewhat loosely, the multi-universe interpretation of quantum mechanics says that everything that can happen will happen; when a random event occurs (say one with two outcomes), the universe splits into two universes - in one the first outcome holds, in the other the second outcome holds.  Wikipedia has a fairly decent overview here, worth taking a moment to scan.  At first this explanation sounds a bit extreme, wasteful in universes, so to speak.  But it does address some fundamental issues that are otherwise hard to make sense of, like quantum mechanical wave collapse (when an observation transforms a system properly described by a wave function into one described essentially classically).  In the multi-universe interpretation, this never needs to happen, because each split universe has its own observer and its own result.

I have a small (and possibly testable) modification to suggest, which I'll get to shortly, after I describe its genesis - or at least the thought that provoked the idea.  It was years ago, and I was driving to work, and pulled into a bank near my office to get some cash - something I'd done many times over the years.  As I pulled in I saw a decent place to park about halfway down the lot, and a thought sprang up unbidden: "If I park there I'll be in an accident."  So, I didn't park there.  I went into the bank, stood in line, got cash, came out - and a car was in that spot, the driver exchanging papers with a second car that had hit it.

This was of course somewhat shocking - it'd never happened before, this isn't the sort of thing that occurs in my life (I can't think of another example, certainly not as striking).  I'm not a mystic, so I began to wonder how it could have happened at all.

I've always wondered about the instant we call "now" - a Euclidean point on the time axis, of zero thickness, has always struck me as somewhat absurd.  Einstein reportedly said "There is no 'now' in Physics", meaning there is no model of it, no description of it, indeed it doesn't appear in physical theories.  If you give this a few moments of thought it's quite amazing - our only experience of life is in the instant we call 'now', and our physical theories don't consider it at all.
 
My thought after the bank parking lot episode, many years ago, was that maybe we don't proceed through time linearly moving forwards - on average we do, perhaps for thermodynamic reasons, but maybe we move forwards a little, back a little, oscillating about what we consider to be "now".  (If you like you could take the furthest point into the future we go and label that 'now', so all of this oscillation is entirely in the past - it is only a naming convention).  Moving backwards in time would change the physics of brain processes so that it would be very unlikely that we'd have any coherent memories of the future.  Occasionally these might persist, perhaps with enough internal consistency that they would be recognizable as useful information - so I might have been 'remembering' an accident, in a manner that felt like a strong intuition to avoid the situation.  This thought has been lying dormant for a long time - perhaps appropriately, as there seemed no way to test it or explore it further.
 
Back to the multi-universe interpretation: all of those splitting universes seem a bit unnatural.  What if, instead of splits, they are different excursions into the future - that is, they all occur in this universe?  Let me describe this by analogy.  Imagine a snow-covered football field, with distance along the field analogous to time (think of one goal line being a few minutes ago, the other being a few minutes on, and where you are being "now").  You start walking at the first goal line, go a few yards, walk back, go forwards again, but perhaps not by the same route; you're moving down the field slowly, but making many tracks forwards and backwards as you go.  These tracks represent the excursions into the future mentioned above, each being something like one of the split universes; and, as you walk, you may interact with the tracks you've already laid down, stumbling over earlier footprints (analogous to the sort of effect one sees in the two-slit experiment, when a photon can 'interfere with itself' and seemingly go through both slits at once), or gravitating to well-trodden areas (analogous to high-probability outcomes).  Eventually you get to the final goal line, and can think over what you've done.
 
Wave function collapse reappears because we're back in one universe, but it might be a tiny bit clearer: your consciousness holds memories, and those of the future excursions are very weak.  As your mind makes sense of where you are it sees multiple tracks behind it, as perhaps it should (see this interesting paper on the possible literal meaning of the 'sum over histories' technique: http://philsci-archive.pitt.edu/3780/1/quantum_path_integral.pdf).
 
I'm glossing over lots of details, such as the nature of the interference between excursions, what provokes the random wandering in time, etc.  Some of these may be fatal to the entire approach.  One implication of this idea is that the movements forwards and backwards in time are discrete (you will walk across a given yard line some integer number of times, indeed an odd number of times); given that interference between the excursions is possible, there might be an experimental way to detect these excursions, estimate how long they are, etc.
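
The parity claim, at least, is easy to check with a toy random walk: a walker with a slight forward drift that starts at one goal line and stops at the other must cross any intermediate yard line an odd number of times.  A quick sketch (field length, drift, and the chosen yard line are arbitrary):

```python
import random

FIELD = 100        # far goal line (yards)
LINE = 50          # count crossings of the line between yards 49 and 50
P_FORWARD = 0.55   # slight forward drift, as in the analogy

def crossings_of_line(seed):
    """Number of times one walk from 0 to FIELD crosses the chosen yard line."""
    rng = random.Random(seed)
    pos, count = 0, 0
    while pos < FIELD:
        step = 1 if rng.random() < P_FORWARD else -1
        if pos == 0 and step == -1:
            continue                       # can't back up past the near goal line
        new_pos = pos + step
        if {pos, new_pos} == {LINE - 1, LINE}:
            count += 1                     # crossed the chosen yard line
        pos = new_pos
    return count

counts = [crossings_of_line(s) for s in range(20)]
print(counts)
print("all odd:", all(c % 2 == 1 for c in counts))
```

Every run reports an odd crossing count, since the walker starts on one side of the line and finishes on the other; each crossing toggles which side it is on.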


 

Wednesday, December 8, 2010

The Ideal Minimall

Large supermarkets I don't really like, with the exception of Whole Foods - and even then there is much in the middle of the store (organic toilet paper and the like) that I have no use for.  I tend to shop daily now - life circumstances change fast, with kids doing this and that, so it is hard to plan long-term, and I don't particularly want to most of the time.

For a few years an idea has been bouncing around my head - the 'ideal minimall'.  It would have four separate stores as its core.  One would be a good butcher shop, with a knowledgeable butcher and a wide selection of meats - quality meats, meaning no hormones, antibiotics, etc.  Local organic meats would be ideal, but that's probably not practical.  I'd also like to be able to find the kinds of things that supermarkets no longer seem to carry, like bones for making stock, or the more unusual organs (for that occasional haggis craving).

Next to this would be a greengrocer, a thing so rare that the term seems to be falling out of use.  A place to get quality vegetables, run by a person who knows something about them.  I would add fruit to this store as well.

The third store would be a fish market.  We have some fish markets in the area, and given the proximity to the sea and the existence of a local fishing fleet, these are pretty good.  The selection can be limited in some, and in others the staff doesn't seem to know as much as it should.  One of my favorites was one in Waltham, MA (I don't remember the name) which seemed to have 3 or 4 people working at all times, had a large selection, and was a place one could buy things like fish heads or lobster bodies for stock and stew purposes.  Of course there has to be enough business to justify the stock size, and enough turnover to maintain quality, so this might be difficult.  One thing I love about Whole Foods is their seafood, which seems to be of high quality at every store - so I'm sure it's doable.

The fourth store would be a bakery.  Supermarket bakeries seem to bake premade or preformulated mixtures - I'm sure it guarantees uniform quality, but that quality isn't very high.  One of my fond memories from childhood is walking into a (good) bakery, and smelling the buttery sweetness that seemed to hang in the air.  A good bakery should do the full range of baked goods, from breads to cakes and pastry. 

Of course all of these stores would be able to handle custom orders, and ought to have knowledgeable staff that could offer advice as necessary. 

There might be other stores as well - a liquor store (perhaps more focused on wine than hard liquors) would be a nice addition, and a cheese/dairy outlet would also be welcome, particularly if the products were from local farms.  A coffeehouse at one end of the minimall might provide a nice gathering place as well.

One could drive in to such a minimall and stroll from store to store, assembling a dinner.  One would likely have to make occasional trips to supermarkets for soaps, napkins, etc., but I think this sort of minimall might be quite an attraction.  One way to do it would be to find an investor willing to put up the money to buy or build the physical infrastructure, then lease to the individual markets, perhaps also taking some equity in those businesses as well.  Such a plan would have to cover the contingencies of a market failing, or key personnel leaving, but that shouldn't be insurmountable.

Issues When Working With MP3s

I recently spent some time trying to put my collection of .mp3 audio files in better shape; this has been an ongoing project for years.  There really aren't any good tools to do it, which, given the popularity of that file format, is somewhat surprising.  I usually find myself doing one task in one app, then a different task in a different app.  Part of the reason I thought to write this post is to see if others have had similar experiences, or found solutions to some of the problems.

My ultimate, ideal goal is to have each .mp3 file be 'complete', with music, lyrics, album art, and credits for everyone who worked on that particular piece.  This would actually be useful - for example, one might want to listen to every piece of music Alan Parsons worked on (beyond the Alan Parsons Project).  One might trace careers of writers, producers, etc. if the tags were complete.

Another goal I've had is to attach years to each piece of music.  The goal would be to let me select music that I might hear on a radio station in a given year, say, 1977.  Of course radio stations play older music, so players really should have some algorithms for picking music from earlier times to add to the playlist.  But attaching years is a necessary first step.

This is harder than it might seem, because work is republished.  If one takes a CD issued recently, the dates attached to the album and the music will be recent, even if the CD is a re-issue of an older album.  The problem is worse with compilations - the dates often reflect the compilation CD's release, not the original issue dates of the music.

I've found attaching dates to be an interesting but laborious process - looking up a given song one might find much earlier versions, live versions, long versions, short versions for AM radio play, etc.  I try to use a date that reflects when I might first have heard it on the radio.  In part it's interesting because some of the older versions show the evolution of the song.  For instance, while trying to date Taco's "Puttin' On the Ritz" I found that the song had been written by Irving Berlin in 1929, with several notable versions (Clark Gable 1939, Fred Astaire 1946) before Taco's version in 1983.  See for instance http://en.wikipedia.org/wiki/Puttin'_on_the_Ritz.  A surprising number of rock songs go back to early Blues songs of the 1910-1920 period, with likely earlier, but undocumented, roots.
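
For what it's worth, the date tags themselves are easy to write in bulk.  Here is a sketch using the Python library mutagen; the filename and year below are just examples, and it assumes the files already carry ID3 tags:

```python
from mutagen.easyid3 import EasyID3
from mutagen.id3 import ID3NoHeaderError

def set_year(path: str, year: str) -> None:
    """Write a year into the file's ID3 date tag (skips files with no ID3 tag)."""
    try:
        tags = EasyID3(path)
    except ID3NoHeaderError:
        print(f"no ID3 tag on {path}; skipping")
        return
    tags["date"] = year          # EasyID3 maps "date" onto the underlying date frame
    tags.save()

set_year("Taco - Puttin' On the Ritz.mp3", "1983")   # hypothetical filename
```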

Just managing the MP3s isn't trivial - take 'artist name', a category in pretty much every player or library app.  Now, should The Beatles be listed as "The Beatles" (which will sort with the "T"s), "Beatles" (which will sort well, but isn't the real band name), or "Beatles, The" (which kind of gets at both the real name and the right sort order)?  Ideally player and library software should let you specify the name as "The Beatles", which is the correct artist name, but then show this in an alphabetical list under the "B"s, not the "T"s.  Almost inevitably one will have some songs in both categories, and the separation of B and T is such that it may be quite hard to notice this.

Some band names are listed inconsistently, e.g. Pinkard & Bowden or Pinkard and Bowden.  Some music pieces are by a particular band, but feature a guest artist - one would like this piece listed for both the band and the artist.  Then there are case issues, e.g. John McCutcheon vs. John Mccutcheon - these kinds of spelling variants creep in for a variety of reasons.  And, would one like to find the music of Carlos Santana in the "C"s or in the "S"s?  Add to this foreign names with accent marks (sometimes); bands which stay the same but make small changes to their names (e.g. John Cougar, John Cougar Mellencamp, etc.).  Semantically these should be grouped together, but they won't be.
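
Some of this can at least be detected automatically.  Here is a sketch of two small helpers: a sort key that files "The Beatles" under B without renaming the artist, and a crude canonical form that groups names differing only in case, punctuation, "&" vs "and", or word order (the example names are the ones above):

```python
import re
from collections import defaultdict

def sort_key(artist: str) -> str:
    """Sort 'The Beatles' as if it were 'Beatles, The', without renaming it."""
    if artist.lower().startswith("the "):
        return artist[4:] + ", The"
    return artist

def canonical(artist: str) -> str:
    """Crudely collapse case, punctuation, '&' vs 'and', and word order."""
    words = re.sub(r"[^a-z0-9]+", " ", artist.lower().replace("&", "and")).split()
    return " ".join(sorted(words))

artists = ["The Beatles", "Beatles, The", "Pinkard & Bowden",
           "Pinkard and Bowden", "John McCutcheon", "John Mccutcheon"]

groups = defaultdict(set)
for name in artists:
    groups[canonical(name)].add(name)

for variants in groups.values():
    if len(variants) > 1:
        print("possible duplicates:", sorted(variants))

print(sorted(artists, key=sort_key))   # The Beatles now sorts among the B's
```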

For classical music I describe the composer as last name first, then first and middle names.  It's just how I think of classical music.  There may be other genres with other common guidelines as well.  And, while on the topic of classical music, I often tell my player to play in random order - but I really would never want to mix classical and, say, rock.  This isn't trivial to achieve; of course one may construct playlists of all classical or all rock, but then when one adds new music one must update these playlists.  The player foobar2000 does some dynamic list assembly, which is a start.  It is a good player, but I have yet to fully tame it, and don't really have the time to dedicate to figuring it out.

Then there is 'genre': one player of mine has hundreds of genres, many of which I just don't understand.  I don't know what "Trip Hop" is, nor how "Electronic" differs from "Electronica".  What I'd like is a smaller subset that isn't too ambiguous, and a way to constrain any new entries in the library to use one of the existing genres that I've found acceptable.  In the absence of this, I've taken to using a kind of 'path' approach, so that similar genres appear next to each other when sorted - so I'll have "guitar" (meaning guitar instrumentals), then "guitar: Spanish", "guitar: electric", etc.
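
A sketch of the "constrain new entries" idea, again using mutagen: walk the library and flag any genre tag not in a short allowed list.  The list and folder path here are placeholders.

```python
import os
from mutagen.easyid3 import EasyID3
from mutagen.id3 import ID3NoHeaderError

# A short, hand-picked genre list; anything outside it gets flagged.
ALLOWED = {"rock", "classical", "folk", "blues",
           "guitar", "guitar: Spanish", "guitar: electric"}

def check_genres(root: str) -> None:
    """Walk a music folder and report genre tags not in the allowed set."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith(".mp3"):
                continue
            path = os.path.join(dirpath, name)
            try:
                genres = EasyID3(path).get("genre", [])
            except ID3NoHeaderError:
                genres = []
            for genre in genres:
                if genre not in ALLOWED:
                    print(f"{path}: unexpected genre {genre!r}")

check_genres(r"C:\Music")   # placeholder library location
```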

Then there is 'album': many of the songs I've got have appeared on multiple albums, e.g. the original release album and perhaps a 'greatest hits' album later on, or a different kind of compilation, perhaps one containing many artists (e.g., a Christmas album).  I would really like to link the song to all of those albums, but there is usually no way of doing this (short of keeping duplicate songs around).
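
One possible workaround, sketched below rather than supported by any player I know of, is to keep a simple mapping from each song to the albums it appeared on and generate one .m3u playlist per album; the paths and album names are invented examples.

```python
from collections import defaultdict
from pathlib import Path

# Hypothetical mapping: one file, and the several albums it appeared on.
song_albums = {
    "Music/example_song.mp3": ["Original Release", "Greatest Hits",
                               "A Christmas Compilation"],
}

# Invert the mapping into per-album track lists.
playlists = defaultdict(list)
for song, albums in song_albums.items():
    for album in albums:
        playlists[album].append(song)

# Write one .m3u playlist per album.
out_dir = Path("Playlists")
out_dir.mkdir(exist_ok=True)
for album, songs in playlists.items():
    safe = "".join(c for c in album if c.isalnum() or c in " -_").strip()
    (out_dir / f"{safe}.m3u").write_text("\n".join(songs) + "\n", encoding="utf-8")
```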

'Rating': many players and managers keep ratings for songs in a separate database, not in the mp3 file itself.  I'm not sure why - perhaps because the ratings are personal, and the songs are assumed to be shared, though that makes little sense.  Every once in a while I'd have to rebuild the database for an app, and all of that information would be lost.  I find it to be very useful when putting together playlists, so I adopted a somewhat radical approach: I have subdirectories named "5", "4", "3", etc., and I move all the songs rated 5 into the "5" subdirectory.  If I have to reestablish the ratings in a given player I sort the songs by file path, then select all of those in the "5" subdir and set their rating to 5.  It takes a few minutes, but is far better than losing that information.
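
If one wanted the rating stored inside the file as well, ID3 does have a "popularimeter" (POPM) frame that mutagen can write, though how any given player treats it varies.  A sketch that reads the star rating back from the subdirectory names above and stamps it into each file - the root path, email identifier, and the 1-5 to 0-255 mapping are all placeholder choices:

```python
import os
from mutagen.id3 import ID3, POPM, ID3NoHeaderError

STARS_TO_POPM = {1: 51, 2: 102, 3: 153, 4: 204, 5: 255}   # a simple linear mapping; conventions vary

def write_ratings(root: str) -> None:
    """Read star ratings from folder names ("5", "4", ...) and store them as POPM frames."""
    for stars, popm_value in STARS_TO_POPM.items():
        folder = os.path.join(root, str(stars))
        if not os.path.isdir(folder):
            continue
        for name in os.listdir(folder):
            if not name.lower().endswith(".mp3"):
                continue
            path = os.path.join(folder, name)
            try:
                tags = ID3(path)
            except ID3NoHeaderError:
                tags = ID3()                       # file had no ID3 tag yet
            tags.add(POPM(email="ratings@example.com", rating=popm_value, count=0))
            tags.save(path)

write_ratings(r"C:\Music\Rated")   # placeholder: expects C:\Music\Rated\5\*.mp3, etc.
```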

So - that's a quick list of issues I've had when trying to manage my mp3 library.  With all of the money and time that's been poured into the production and sale of these, I'm surprised no one has done a player or library manager that can handle all of these issues.  foobar2000 seems to come closest, but it's always doing something a bit strange, and I just haven't had the time to master its idiosyncrasies.  I would very much like feedback on how others have dealt with these problems.