Curtis Bartley's Blog

The Solar Garage Door (2017-10-01)
The cost of solar photovoltaics keeps dropping.  In fact, solar cells have been getting cheaper every year for so long that they now have their own equivalent of Moore's Law: Swanson's law:

Swanson's law is an observation that the price of solar photovoltaic modules tends to drop 20 percent for every doubling of cumulative shipped volume. At present rates, costs halve about every 10 years.

Unfortunately, a complete photovoltaic system consists of a lot more than just solar cells, and the other components of the system are not getting cheaper as quickly.

However, this does introduce an interesting possibility: It may become cost-effective to install solar panels on vertical surfaces even though they won't be as effective as a rooftop installation.  You'll need more solar panels to generate the same amount of power as a rooftop installation with a better angle.  This added cost might be entirely offset by much lower installation costs, however.  As an additional bonus, vertically mounted panels will be much easier to keep clean, especially if they're installed on first-floor walls.

I've been thinking about this idea for a while, but today I started wondering how far you could take the idea, and I came up with an even more absurd idea: the solar garage door.  I'm imagining a mass-produced, standard-sized garage door with integrated solar PV panels, which is a drop-in replacement for many of the garage doors currently installed in houses all across the U.S.  Replacing a garage door is by far the easiest thing I can imagine doing to the outside of a house, and is well within the capabilities of the average do-it-yourselfer.  I would expect a couple of professionals to be able to replace a typical garage door in a matter of minutes.

Obviously not every garage door is a candidate, since many will be facing the wrong direction, and there are many that are too small or oddly sized.  I don't know how many garages there are in the United States, but I'm guessing on the order of tens of millions.  And probably about a quarter of them are south-facing.  In addition, the typical garage door will not be obstructed by trees or other foliage since there will almost always be a driveway in front of it.

OK, so realistically, a standard American two-car garage door is only about 10 or 12 square meters, and you can only expect to get 100 or 200 watts per square meter even in the best circumstances.  So the solar garage door might only produce 5 or 10 kilowatt hours over the course of a day.  That's not enough to run a house or charge an electric car.  But it might be enough to pay for itself in a couple of years, especially since the greatest power production will occur during the times when electricity is the most expensive.  And certainly this idea makes more sense for Dallas than Seattle.
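
For the curious, here's that arithmetic as a quick JavaScript sketch.  The panel area, output per square meter, and equivalent hours of full sun are my own rough assumptions, not measured figures:

    // Rough estimate of daily output from a solar garage door.
    // All inputs are ballpark assumptions, not measurements.
    const areaSquareMeters = 11;      // standard two-car door: ~10-12 square meters
    const wattsPerSquareMeter = 150;  // output in good light: ~100-200 watts per square meter
    const fullSunHours = 4;           // equivalent full-sun hours for a vertical, south-facing panel

    const dailyWattHours = areaSquareMeters * wattsPerSquareMeter * fullSunHours;
    console.log(`~${(dailyWattHours / 1000).toFixed(1)} kWh per day`);  // prints "~6.6 kWh per day"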

Five or ten kilowatt hours is hard to get excited about in most circumstances.  But there are times when it is very exciting indeed -- when the power grid is down.  Keep in mind, as I write this Florida is recovering from Hurricane Irma, Puerto Rico has just been hammered by Hurricane Maria, and Houston is still drying out from Hurricane Harvey.

Many people keep generators for dealing with prolonged power outages.  Solar PV has some potential advantages.  It can pay for itself over time since it produces electricity for free once it's installed, and it doesn't require fuel to run.  In fact it doesn't have moving parts, so it's very reliable compared to a generator which may or may not run if you haven't used it in a long time -- or possibly ever.  And a solar system doesn't preclude buying a generator as well.  In fact, after the solar system pays for itself, it can pay for the generator too.

A solar garage door will only produce a little bit of power during the day, but even a small installation can produce enough power to charge a cell phone or a laptop, and possibly even run a freezer enough to keep it from thawing out overnight.  A refrigerator may require a battery to keep it cold enough overnight, but batteries are getting cheaper too.

So anyway, my argument for the solar garage door is that it could be an insanely cheap way to retrofit solar to an existing house, its incremental savings on the electricity bill will allow it to pay for itself, and then it provides some added security in the event of a prolonged power outage.

It also provides a way for prospective solar customers to get their feet wet with minimal risk, expense, or even hassle.  And it might be a good way for a new solar company to bootstrap its way into the emerging market (well, maybe) of vertically installed solar panels.
Using Algolia's Hacker News Search to get an alternative view of the current most popular posts (2017-08-12)
I've always found the ranking mechanism on Hacker News to be somewhat opaque, especially with respect to posts which are flagged.  Sufficient flagging can kill a post in the official sense, but what seems to happen more often is that a post receives one or two flags and then gets aggressively down-ranked and dropped so far down on the popular articles feed (three or four pages or more) that it might as well be dead.  I've both flagged and vouched for submissions in the past, and I can say that some submissions definitely deserve to get flagged into oblivion.  The call on a lot of other submissions is more subjective, however.  So sometimes I'd like to get a view of currently popular submissions without flagging taken into account.  It turns out there's an easy way to do that.

Algolia's Hacker News search page (https://hn.algolia.com) provides a way to search popular articles within a date range.  One of the pre-canned options is the last 24 hours, which provides a view similar to Hacker News's front page, but not identical.  In particular it seems to rank submissions purely by upvotes.

https://hn.algolia.com/?query=&sort=byPopularity&prefix&page=0&dateRange=last24h&type=story
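
If you'd rather script it, the same data is available from Algolia's public HN Search API (documented at https://hn.algolia.com/api).  A minimal JavaScript sketch, runnable in a modern browser console: the "search" endpoint ranks by popularity (there's a separate "search_by_date" endpoint for recency), and computing the 24-hour cutoff client-side is my own addition:

    // Fetch the most popular HN stories submitted in the last 24 hours.
    const cutoff = Math.floor(Date.now() / 1000) - 24 * 60 * 60;
    const url = `https://hn.algolia.com/api/v1/search?tags=story&numericFilters=created_at_i>${cutoff}`;

    const response = await fetch(url);
    const { hits } = await response.json();
    for (const hit of hits) {
      console.log(`${hit.points} points: ${hit.title}`);
    }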

 
How I wish Google CEO Sundar Pichai had responded to James Damore's memo about diversity (2017-08-10)
James Damore, who at the time was a Google employee, wrote an internal memo about diversity at Google.  It did not stay internal to Google and ultimately, as they say, it went viral on the Internet. I think Google CEO Sundar Pichai should have responded to this memo in a different way than he did.  Something kind of like this:

I have not yet made a decision about how to respond to James Damore's memo about diversity at Google.  I am going to wait for the outrage to subside, for the various members of the Twitter mob to lay down their metaphorical pitchforks and return to their homes.  I do not wish to be a participant in a modern-day Internet lynching -- and neither should you.  If anything Mr. Damore has written in the memo constitutes a fireable offense (as some people have alleged), then I will, indeed, take that step.  However, such a drastic action does not need to take place immediately.  If it is in fact justified, then delaying it a few days will cause no great harm.  And this little bit of extra time will allow for hurt feelings and outrage to subside, and that will in turn allow us all to approach the issue with cooler heads and sounder judgment.
 
-- Imaginary Sundar Pichai

Notes and References

Pichai's actual response can be found here: https://www.blog.google/topics/diversity/note-employees-ceo-sundar-pichai/.  And, of course, he also fired Damore: https://www.bloomberg.com/news/articles/2017-08-08/google-fires-employee-behind-controversial-diversity-memo.  My contention is that even if he felt he should fire Damore, he should have waited longer, because, well, the pitchforks were still out.

There are a number of different versions of the "diversity memo" floating around on the web, some of them incomplete due to missing links.  I believe that the PDF version of the memo assembled by Vice is a fairly accurate representation of the original memo.  It can be found at: https://www.documentcloud.org/documents/3914586-Googles-Ideological-Echo-Chamber.html.  This document is linked from https://motherboard.vice.com/en_us/article/evzjww/here-are-the-citations-for-the-anti-diversity-manifesto-circulating-at-google, which also provides some useful context.

I have also attached a copy of the PDF directly to this post, in case any of the links above should go stale.

As a final note, I am referring to the memo as the "diversity memo" whereas much of the reporting about it uses the term "anti-diversity" (including the Vice link above).  Labeling this memo as "anti-diversity" demonstrates a truly profound lack of reading comprehension, since the author flatly states that he values diversity in the very first sentence of the memo.  Conor Friedersdorf addresses this issue at length in The Atlantic: https://www.theatlantic.com/politics/archive/2017/08/the-most-common-error-in-coverage-of-the-google-memo/536181/.
Sage-y Cornbread (2017-05-21)
Got to the grocery store too late the day before Thanksgiving, and discovered that they're out of Stove Top Stuffing Mix? Need an emergency back-up plan? (Originally posted to Facebook ~ Thanksgiving 2016, only now getting around to posting it to the blog.)

Part 1
  • 1.5 cups corn meal
  • 0.5 cups flour
  • 1 tbsp + 1 tsp baking powder (4 tsp)
  • 2 tbsp sugar
  • 0.75 tsp salt
  • ~ 2 tbsp chopped fresh sage
Mix dry ingredients thoroughly in a mixing bowl.

Part 2
  • Cooking oil
  • 1 stick butter
  • 1 cup milk
  • 1 egg
Pour about a tbsp of cooking oil (canola, etc.) in a 9x9 Pyrex baking dish. Use a paper towel to thoroughly coat the inside of the dish with oil.

Put a stick of butter in the dish and
  • microwave it for 3 minutes on 10% power (soften it, don't liquify it)
Add one raw egg and one cup of milk
  • to the dry ingredients and mix them thoroughly
Add the softened butter
  • to the mixing bowl and mix it some more

Part 3

Notes:
  1. I wait to preheat the oven since that gives the batter more time to rise.
  2. I preheat the (greased!) Pyrex dish with the oven to keep the cornbread from sticking later.

Stick the empty Pyrex dish in the oven and
  • preheat it to 350 degrees
Once the oven has preheated
  • remove the Pyrex dish
  • pour/scrape the batter into the dish
  • pat it down so it's basically flat
Put the dish back in the oven and bake the cornbread
  • at 350 degrees
  • for 40 minutes

Let the cornbread cool for a few minutes.

Notes
  • Typically white flour is used as the second ingredient in cornbread, but I always use spelt, which is pretty similar to ordinary whole wheat flour.
  • If you want to do a gluten-free variant, I have successfully used cornflour (more finely ground than cornmeal). My recollection is that it was somewhat crumblier than normal, but otherwise OK. I have not tried it but I suspect that rice flour would work about the same.
  • I'm not sure what would happen if you just used two cups of cornmeal. I expect the result would just be really crumbly cornbread, but if you're planning to use it as a substitute for turkey stuffing, that's probably OK.
  • The batch I just made came out pretty crumbly anyway.
  • I just eyeballed the sage. It might have been closer to 3 tbsp, I'm not sure. You could substitute dry sage, and if you do you will need a lot less, but I don't know how much exactly.
  • This should go without saying, but you can leave the sage out if you just want regular cornbread.
  • You can use less butter, no butter, or substitute canola oil if you're looking for something a little more healthy.
  • My batter comes out pretty thick, especially with the whole stick of butter. You can just add a little more milk if you find the thick batter to be an issue.
  • Astute readers will note that since I softened the butter in the Pyrex dish, I could just have used the butter residue to grease the dish. Usually I don't use butter at all (certainly not a whole stick) and just use a little canola oil to grease the dish.
  • As mentioned above, I wait until the batter is ready before I even start to preheat the oven. This gives the cornbread batter more time to rise, which makes a difference in my experience, but seems to be a problem that other people don't have. (Maybe my baking powder is old?)
  • Also as mentioned above, I preheat the greased Pyrex dish as well as the oven. This keeps the cornbread from sticking to the dish after it's done and makes it much easier to clean the dish afterwards. However, you have to remember that the dish is still hot when you're pouring the batter into it!
  • If you're feeling ambitious, you could use the butter to saute garlic, chopped celery, chopped onions or other stuffing-like ingredients to add to the batter.
  • No matter how hard I try, I can never cut the cornbread into equal-sized pieces. This bothers me more than it should.
Someday Soon "Chemtrails" May Be Real (2016-12-06)
Airplane contrails are real, but we know what they are: water vapor and small amounts of other combustion products created by jet engines.  The other pollutants may be somewhat harmful, but they are basically the same as those produced by any other form of combustion.  So what are chemtrails?  Wikipedia sums it up nicely:

The chemtrail conspiracy theory is the unproven belief that long-lasting trails, so-called "chemtrails", are left in the sky by high-flying aircraft and that they consist of chemical or biological agents deliberately sprayed for sinister purposes undisclosed to the general public. [1]

When I say chemtrails may be real in the future, I'm really only making the claim that high-flying aircraft will be deliberately spraying chemicals that are not simply a normal by-product of jet engine operation.  However, it won't be carried out for "sinister" reasons and it won't be done in secret.  Instead it will only be done after a great deal of public discussion and scientific deliberation.

The chemical in question?  Sulfur dioxide. 

In 1991 Mount Pinatubo in the Philippines produced the second largest volcanic eruption of the 20th century.  Besides massive amounts of volcanic ash, Pinatubo injected an estimated 17 million metric tons of sulfur dioxide into the atmosphere.  This caused measurable global cooling of about four-tenths of a degree Celsius from 1991 to 1993.  The effects of the eruption lasted for about three years. [2]

The Mount Pinatubo eruption and its worldwide effect on the climate has raised the possibility that we could intentionally inject sulfur dioxide into the atmosphere to counter-act the effects of global warming.  This kind of large scale intentional intervention is frequently called "geoengineering" although Wikipedia prefers the term "climate engineering". [3]

Seventeen million tons of sulfur dioxide might seem like a lot, but over one one-year period in 2013 and 2014 U.S. airlines alone consumed about 50 million tons of aviation fuel.  If jet aircraft were burning fuel (at altitude) as sulfur-heavy as marine bunker fuel (3 or 4%) then we could be talking about the equivalent of a Mount Pinatubo eruption every couple of years. [4][5][6]
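
Footnote [6] below spells out the arithmetic behind that claim; here is the same estimate as a quick JavaScript sketch (fuel volume, fuel density, and sulfur content are all rough assumptions, as noted there):

    // Back-of-the-envelope: SO2 from one year of U.S. aviation fuel
    // burned at marine-bunker-fuel sulfur levels (see footnote [6]).
    const gallons = 16.2e9;               // ~1 year of U.S. airline fuel consumption
    const litersPerGallon = 3.78541;
    const kgPerLiter = 0.775;             // low-end jet fuel density
    const sulfurFraction = 0.035;         // 3.5% sulfur by weight
    const so2PerSulfur = 64.066 / 32.06;  // SO2 is roughly half sulfur by mass

    const fuelTons = (gallons * litersPerGallon * kgPerLiter) / 1000;  // metric tons
    const so2Tons = fuelTons * sulfurFraction * so2PerSulfur;
    console.log(`${(so2Tons / 1e6).toFixed(2)} million tons of SO2`);  // prints "3.32 million tons of SO2"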

A massive climate engineering project to inject sulfur dioxide into the stratosphere is clearly plausible, but is it a good idea?  We can do it, but should we?  There are two big questions we need to answer.  How useful is it? And: How risky is it?

Earth's surface has warmed by about 0.85 degrees Celsius from 1880 to 2012.  In comparison, the effects of the 1991 Mount Pinatubo eruption lasted for a couple of years and caused global cooling that was estimated to be about 0.4 degrees Celsius.  Pinatubo-scale sulfur dioxide injection -- if sustained -- could offset a large proportion of the global warming that we've already experienced.  Of course sulfur dioxide injection only treats the symptoms of global warming, not the root cause.  But it sure looks like it could provide some real value as an intermediate stop-gap solution while other more permanent solutions are brought online. [7][2]

The data from the Mount Pinatubo eruption tells us that sulfur dioxide in the stratosphere can cause cooling comparable to the global warming we've seen so far.  But it tells us something else as well: The effects of stratospheric sulfur dioxide have a limited lifetime, on the order of just a couple of years.  This is bad in the sense that to be useful, the injection program has to be sustained.  But it is good -- very, very good -- in the sense that it reduces risk.  The program could be scaled up, the impact could be measured, and, if the side-effects are too serious, it could be scaled back down again very rapidly.  This greatly reduces the risk of the undertaking.

Global warming melts polar ice packs and ice sheets, exposing darker water and land underneath.  The reduction in reflectivity results in more of the sun's energy being absorbed by the earth, resulting in warmer temperatures, resulting in more ice melting, resulting in more land and water being exposed, resulting in even warmer temperatures.  This is often incorrectly referred to as a "negative feedback loop".  In reality it is a positive feedback loop with negative consequences.  There are other feedback loops of great concern.  For example, as arctic regions get warmer, methane that has been trapped in permafrost and undersea clathrates is released.  And since methane is a powerful greenhouse gas, it too can cause a positive feedback loop. [8][9]

The offset cooling provided by stratospheric sulfur dioxide injection would interfere with these feedback loops.  This could be particularly valuable in the case of methane, which is a more powerful greenhouse gas than carbon dioxide but which also has a quite short lifetime in the atmosphere.

Stratospheric sulfur dioxide injection is not a panacea.  But what it can do is buy us time.  It can buy us time to replace fossil fuels with cleaner sources of energy that don't release carbon dioxide into the atmosphere. It can buy us time to replace less energy efficient technologies with more efficient ones. It can buy us time to deploy large scale climate engineering projects that do remove carbon dioxide from the atmosphere. It can buy us time to deal with the immediate consequences of global warming and climate change (and we are simply going to have to deal with some of them), and indeed it can buy time for developing economies to, well, develop.  Because everything we need to do is going to be easier if everybody has more money.

Now, to get back to my original point.  I don't know for sure that we're going to be using jet aircraft to inject sulfur dioxide into the atmosphere, and I'm no expert anyway.  But I think we're going to do it.  And if we do, the result will be, quite literally, "chemtrails".

Footnotes
  • [1] https://en.wikipedia.org/wiki/Chemtrail_conspiracy_theory
    • The chemtrail conspiracy theory is the unproven belief that long-lasting trails, so-called "chemtrails", are left in the sky by high-flying aircraft and that they consist of chemical or biological agents deliberately sprayed for sinister purposes undisclosed to the general public.
  • [2] https://en.wikipedia.org/wiki/Mount_Pinatubo
    • The second-largest volcanic eruption of the 20th century, and by far the largest eruption to affect a densely populated area, occurred at Mount Pinatubo on June 15, 1991.
    • The injection of aerosols into the stratosphere is thought to have been the largest since the eruption of Krakatoa in 1883, with a total mass of SO2 of about 17,000,000 t (19,000,000 short tons) being injected—the largest volume ever recorded by modern instruments.
    • This very large stratospheric injection resulted in a reduction in the normal amount of sunlight reaching the Earth's surface by roughly 10% (see figure). This led to a decrease in northern hemisphere average temperatures of 0.5–0.6 °C (0.9–1.1 °F) and a global fall of about 0.4 °C (0.7 °F).
    • The stratospheric cloud from the eruption persisted in the atmosphere for three years after the eruption.
  • [3] https://en.wikipedia.org/wiki/Climate_engineering
    • Some proposed climate engineering methods employ methods that have analogues in natural phenomena such as stratospheric sulfur aerosols and cloud condensation nuclei. As such, studies about the efficacy of these methods can draw on information already available from other research, such as that following the 1991 eruption of Mount Pinatubo.
    • Climate engineering, commonly referred to as geoengineering, also known as climate intervention,[1] is the deliberate and large-scale intervention in the Earth’s climatic system with the aim of limiting adverse climate change.
  • [5] https://en.wikipedia.org/wiki/Fuel_oil
    • Governing bodies (i.e., California, European Union) around the world have established Emission Control Areas (ECA) which limit the maximum sulfur of fuels burned in their ports to limit pollution, reducing the percentage of sulfur and other particulates from 4.5% m/m to as little as .10% as of 2015 inside an ECA. As of 2013 3.5% continued to be permitted outside an ECA, but the International Maritime Organization has planned to lower the sulfur content requirement outside the ECA's to 0,5% m/m.
  • [6] Math
    • How much sulfur dioxide could be produced by burning 16.2 billion gallons of fuel which is 3.5% sulfur by weight?
      • > 16.2 billion gallons
      • > 61.3 billion liters                   ; 3.78541 liters per gallon
      • > 47.5 billion kg                       ; 775 g/l or 0.775 kg/l, the lower figure for jet fuel
      • > 47.5 million tons                     ; 1000kg per metric ton
      • > 1.66 million tons                     ; tons sulfur, assuming 3.5% sulfur by weight
      • > 3.32 million tons                     ; sulfur dioxide is about 1/2 sulfur by mass (32.06 / 64.066)
    • Answer: 3.32 million tons sulfur dioxide
      • This number is at best an estimate.  Aircraft would only burn sulfur heavy fuels at altitude, so not all fuel consumed would produce stratospheric sulfur dioxide.  On the other hand, average jet fuel density is no doubt greater than 775 g/l, and sulfur concentrations of up to 4.5% have been reported for marine bunker fuel.
      • Most importantly, U.S. air travel only accounts for a fraction of total air travel -- a large fraction, no doubt, but probably less than half.
        • > 3.32 million tons                    ; assume U.S. aircraft inject 3.32 million tons of sulfur dioxide
        • > 6.64 million tons                    ; assume European and Asian aircraft can inject a similar amount
        • > 13.3 million tons                    ; assume a 2 year time horizon
  • [7] https://en.wikipedia.org/wiki/Global_warming
    • The global average (land and ocean) surface temperature shows a warming of 0.85 [0.65 to 1.06] °C in the period 1880 to 2012, based on multiple independently produced datasets.
  • [9] https://en.wikipedia.org/wiki/Arctic_methane_emissions
    • Arctic methane release is the release of methane from seas and soils in permafrost regions of the Arctic, due to deglaciation. While a long-term natural process, it is exacerbated by global warming. This results in a positive feedback effect, as methane is itself a powerful greenhouse gas.
Faded Paper Figures: "The Persuaded" (2015-06-03)
Faded Paper Figures: "The Persuaded": 
Soldier's Eyes - Jack Savoretti (2015-03-02)
Soldier's Eyes - Jack Savoretti: 
Self-sinking capsules and cryobots (2015-02-20)
Probing Of The Interior Layers Of The Earth With Self-sinking Capsules: http://www.cmp.caltech.edu/refael/league/radioactive-core-earth.pdf

It is shown that self-sinking of a spherical probe in the form of a capsule filled with radionuclides, whose decay heats and melts the rock in its path, deep into the Earth is possible. Information on the formation, structure, and shifts deep in the Earth can be obtained by recording and analyzing acoustic signals from the recrystallization of the rock by the probe.

Similar technology for penetrating ice:

A cryobot or Philberth-probe is a robot that can penetrate water ice. A cryobot uses heat to melt the ice, and gravity to sink downward. The difference between the cryobot and a drill is that the cryobot uses less power than a drill and doesn't pollute the environment.

Two songs by Ringside (2015-02-03)
Ringside - Wikipedia
Curiosity wheel damage: The problem and solutions (link) (2015-01-09)
This is an excellent article about the unexpected wheel damage experienced by the Curiosity rover: Curiosity wheel damage: The problem and solutions.
Greg Brockman on figuring out the CTO role at Stripe (2014-10-28)
Greg Brockman on figuring out the CTO role at Stripe: #define CTO.

This post also had some good stuff about figuring out the VP of Engineering role, since Brockman's counterpart was figuring that out at the same time.
More than you ever wanted to know about Tunnel Boring Machines (2013-12-16)
More than you ever wanted to know about Tunnel Boring Machines:
Some thoughts on the Ender's Game movie (2013-11-30)
From the previews, I expected that Ender's Game the movie had taken major liberties with Ender's Game the novel.  After seeing the movie, I have to say that the liberties taken were surprisingly small.  In fact I'm not sure I've ever seen any movie based on a novel try so hard to be faithful to the source material.

Unfortunately there's a downside -- there were a number of scenes (too many of them) that seemed quite forced, precisely because they were trying so hard to cram in themes from the book that there just wasn't time for in a two-hour movie.

This is an age-old dilemma: What do you want?  A more faithful adaptation of the source material, or a better movie?

As a fan of the novel, I have to say I am extremely pleased to see that this movie was made by people who understood and clearly cared about the novel.  On the other hand, I think science fiction cinema really suffered for this choice.  I'm really torn.  I really think they should have taken more liberties with the story and made a better movie.  On the other hand, if they'd done that I'd probably be writing a blog post complaining about the changes.

One last thing to think about -- Ender's Game was originally a short story before it was a novel.  Adapting the short story might have made for a much better movie.  But then you end up with a movie that doesn't address the morality of the xenocide against the Buggers at all, which is an important (and maybe the most important) theme of the novel.
Kent - Sundance Kid (2013-08-26)
This song turned up on Pandora, on my "The Dandy Warhols" channel.

Kent - Sundance Kid

You might imagine that a band named Kent is British, but in fact they're Swedish, with lyrics predominantly in Swedish.  Sundance Kid, at least, is incredibly catchy, which is kind of a problem when the lyrics are in a language you don't speak.

Lyrics with translation.
404 Error Page Status (2010-04-05)

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide tools that we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.  This is being tracked in bug 482874.

Status

Stalled.

The primary sticking point is the review of docshell/resources/content/notFoundError.xhtml.  The network error pages seem to be poorly owned, and it’s not clear to me who to even ask for review.

Final L10N approval is waiting on error page approval.

There are some relatively minor changes to nsDocShell.cpp since Boris Zbarsky’s approval last year.  I don’t think this is a big deal, but it is C++ and it is in docshell, so it probably needs a further look.

There’s one minor outstanding change needed for RTL approval.

Next Steps

A code reviewer for notFoundError.xhtml needs to come forward and help me out. Any suggestions?

If a code reviewer materializes and provides code review, I'll take care of the necessary code changes.  I have already tried (without success) to run down a code reviewer myself, so I need help from someone to do that.

Notes

  • Just a reminder: I am no longer a Mozilla employee, just a volunteer.  I have time right now to work on this feature.  In another month or two this may not be the case.
  • This page has been translated to Belorussian, see http://pc.de/pages/404-error-page-status-be.

 

Parameterizing ENTITY definitions for use in XHTML (2010-02-28)

It sometimes makes sense to parameterize localizable strings.  For example, in the 404 error page, I need to display a string that looks like:

 "Search {site} for {search-phrase}"

If this string were in a .properties file, it might actually look like:

  "Search %S for %P"

However, since I need to reference the string from a non-privileged XHTML page, I have to use an  ENTITY definition in a ".dtd" file.  The "%" character is not legal in an entity definition, at least not as a literal.  And, if we want to parameterize an entity, we have to roll our own parameterization scheme anyway.  For XHTML files, it turns out the simplest way to do this is to embed XHTML markup in the entity definition, for example:

  "Search <span id='site'/> for <span id='searchPhrase'/>"

It's then trivial to look these elements up using JavaScript and inject the appropriate contents at runtime.  This makes for ugly looking strings, but it's dirt simple to implement.
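
Something along these lines would do it (just a sketch -- the element IDs match the example above, and the two values are hypothetical stand-ins for whatever gets computed at runtime):

    // Hypothetical runtime values computed elsewhere in the error page's script.
    const siteName = "example.com";
    const searchPhrase = "solar garage door";

    // Inject the values into the placeholder SPANs that came from the entity string.
    document.getElementById("site").textContent = siteName;
    document.getElementById("searchPhrase").textContent = searchPhrase;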

It occurred to me the other day that, since you can embed references to other entities inside an entity definition, you could replace the SPAN markup with something like:

  "Search &param.site; for &param.searchPhrase;"

A more complete example might look like:

  <!ENTITY httpFileNotFound.searchSiteFor
      "Search &httpFileNotFound.paramSite; for &httpFileNotFound.paramSearchPhrase;">

where the parameter entity definitions look something like:

  <!ENTITY httpFileNotFound.paramSite "<span id='site'/>">
  <!ENTITY httpFileNotFound.paramSearchPhrase "<span id='searchPhrase'/>">

This looks better to me than having the SPAN elements inlined.  It would look even better if we could dispense with the "httpFileNotFound." qualification and just say:

  <!ENTITY httpFileNotFound.searchSiteFor "Search &paramSite; for &paramSearchPhrase;">

or even:

  <!ENTITY httpFileNotFound.searchSiteFor "Search &SITE; for &SEARCH_PHRASE;">

What do you think?
Insertion of tracing code using the C++ preprocessor (2009-09-29)

A few months back I got this wild idea that you could insert trace-logging code (not jit-tracing, that's something else) into more or less arbitrary C++ by redefining certain C++ keywords as macros. It turns out that you can, in fact, make this work, at least for certain non-trivial cases. Notably, I've been able to compile an instrumented version of Firefox on OS X using this technique.

I'm not sure whether the technique is generally useful, but given our new focus on crashes and hangs, I thought I'd at least mention it. It's probably not particularly useful for crashes but it may have some utility for debugging hangs. I used an earlier version of this code to debug a Firebug hang (Bug 497028), although in that case I already had a good idea where the problem was and I probably could have used the debugger just as effectively.

Here's the latest tracing code, which I have in a file called "quicktrace.h":

#ifdef __cplusplus

#include <stdio.h>
#include <string.h>

static void QuickTrace(const char *fileName, int lineNumber)
{
  if (lineNumber % 100 != 0) return;
//  if (strstr(fileName, "/js/") != NULL) return;
  fprintf(stderr, ";;; %s %d\n", fileName, lineNumber);
}

#define if     if (QuickTrace(__FILE__, __LINE__), 0) {throw 0;} else if
#define for    if (0) {throw 0;} else for
#define switch if (0) {throw 0;} else switch
#define do     if (0) {throw 0;} else do

#endif

I inserted this code globally by adding the following two lines to my .mozconfig:

export CXXFLAGS="-include /Users/cbartley/dev/mozilla-e/src/quicktrace.h"
ac_add_options --enable-cpp-exceptions

Note that the use of exceptions is non-functional -- the only purpose is to suppress warnings about code paths that don't return a value. If you don't care about the warnings you can dispense with the "throw 0" statements in the header file and the --enable-cpp-exceptions in your .mozconfig file.

Notes:

  • This is unquestionably an abuse of the preprocessor. Don't go down this route unless you really think you know what you're doing.
  • If you really want to create an instrumented build, there are better ways to do it, for example Taras Glek's "Pork" tool. Even using shell scripts and sed to create an instrumented source tree might be a better choice.
  • I think the code in this post will probably work on any GCC-based build, but I don't know for sure. As far as I can tell, Visual C++ doesn't have an equivalent of GCC's "-include" flag and I haven't tried this on Windows.
  • To do useful trace logging you really want a global implementation of the "QuickTrace" function. This is somewhat challenging for Firefox because of the way it's linked. I'm leaving this part as an exercise to the reader.
  • The Firefox build actually compiles some tools from source and then uses those tools later on in the build. This means that if you're going to generate logging from the trace function you do not want to send it to stdout.
  • It might make more sense to include the quicktrace.h header file from another very commonly used header file. This might be the only option on Windows anyway.
  • I think it should be pretty easy to build an instrumented Firefox using this approach and the Try server. I haven't actually attempted this yet, though.
  • If you generate output every time the trace function is called, it will be unbearably slow.
  • You can't generate a proper call graph using this approach, although with some post-processing you could probably get a useful approximation of a call graph.
  • There's probably some caveat that I forgot to mention.
404 Error Page Status (2009-09-26)

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide tools that we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.  This is being tracked in bug 482874.

Status

On hold for the week while I worked on higher priority Firefox 3.6 stuff.  There are still some L10N issues to address and some general touch up before I can roll a new patch.

Next Steps

Address remaining issues raised in last round of code review and get another patch out.  This will probably have to wait until after the Firefox 3.6 code freeze that's coming up.

Notes
  • The feature is controlled by a pref so it can be turned off.
  • Webmasters who don't want their 404 error pages to be overridden may have to add padding to their 404 error pages.  However, since IE and Google Chrome are already overriding 404 error pages using a similar size test, webmasters already need to do this.
  • We want to provide a way for the user to see the original 404 error page, but that's not in this patch.
Troubleshooting Information (AKA about:support) Status (2009-09-26)

Feature Description

Mozilla's support organization has a longstanding request for a Firefox diagnostic page that can provide information about the user's Firefox installation such as which extensions are installed and what prefs have been modified.  The result is the "Troubleshooting Information" page, which is also accessible by typing "about:support" in the location bar.  This feature is fairly constrained for 3.6 since we didn't start work on it until a few days before string freeze. 

Screenshot: https://bug367596.bugzilla.mozilla.org/attachment.cgi?id=400664
Project Page: Firefox/Projects/about:support
Bug: 367596 -  (about:support) [RFE] Create a support info page/lister.

Status

  • Landed!  Troubleshooting Information is now available in both trunk and Firefox 3.6 nightlies.
Next Steps
  •  Bug 518601 -  Troubleshooting Information page should not allow copy-and-paste of the profile directory.  This is a potential security issue and needs to be addressed before Firefox 3.6 ships.
  •  Bug 516616 -  Add an "Installation History" section to about:support.   Nice to have for 3.6.
  •  Bug 516617 -  Add an "Update History" section to about:support.  Nice to have for 3.6.

Notes
  • This is a starting point, not an ending point.  We can extend the functionality for Firefox 3.7.
Troubleshooting Information (AKA about:support) Status (2009-09-19)

Feature Description

Mozilla's support organization has a longstanding request for a Firefox diagnostic page that can provide information about the user's Firefox installation such as which extensions are installed and what prefs have been modified.  The result is the "Troubleshooting Information" page, which is also accessible by typing "about:support" in the location bar.  This feature is fairly constrained for 3.6 since we didn't start work on it until a few days before string freeze. 

Screenshot: https://bug367596.bugzilla.mozilla.org/attachment.cgi?id=400664
Project Page: Firefox/Projects/about:support
Bug: 367596 -  (about:support) [RFE] Create a support info page/lister.

Status

  • The implementation went through seven or so revisions in the days just prior to Firefox 3.6 string freeze.
  • Current page design includes "Application Basics" (App name, version, profile directory, etc.), "Installed Extensions", and "Modified Preferences".
  • Strings for all of the above were landed on trunk and the 1.9.2 (Firefox 3.6) branch in a strings-only patch.  This patch also included strings for "Installation History" and "Update History" sections, in hopes that we can get those features in as well for Firefox 3.6 (Bugs 516616 and 516617).
  • Discovered a performance bug in the FUEL preferences API (Bug 517312).  I've rewritten part of this code to use the lower-level preference API, which avoids the bug and is just much speedier all around.
Next Steps

Address the remaining issues from the last round of code review, and see if we can get the main patch landed.

Notes
  • This is a starting point, not an ending point.  We can extend the functionality for Firefox 3.7.
404 Error Page Status (2009-09-19)

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide tools that we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.  This is being tracked in bug 482874.

Status

Final work was delayed while I worked on about:support/Troubleshooting Information.  Since then I've rewritten the Places part of the patch in JavaScript as requested in the most recent code review.  I still have some L10N issues to address and some general touch up.

Next Steps

    Pinky: "Gee Brain, what do you want to do next week?"
    The Brain: "The same thing we do every week, Pinky—try to land this patch!"

Notes
  • The feature is controlled by a pref so it can be turned off.
  • Webmasters who don't want their 404 error pages to be overridden may have to add padding to their 404 error pages.  However, since IE and Google Chrome are already overriding 404 error pages using a similar size test, webmasters already need to do this.
  • We want to provide a way for the user to see the original 404 error page, but that's not in this patch.
404 Error Page Status (2009-09-08)

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide tools that we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.  This is being tracked in bug 482874.

Status

Found and worked around an SQLite bug (Bug 514291).  Did some performance analysis, suggestions-query tweaking, etc.

Latest patch is attached to the bug and review is requested.

A try server build is available.
Next Steps
  • If the reviews come back positive, get it landed.
  • If the reviews raise issues, address them ASAP.
Notes
  • The feature is controlled by a pref so it can be turned off.
  • Webmasters who don't want their 404 error pages to be overridden may have to add padding to their 404 error pages.  However, since IE and Google Chrome are already overriding 404 error pages using a similar size test, webmasters already need to do this.
  • We want to provide a way for the user to see the original 404 error page, but that's not in this patch.
404 Error Page Status (2009-08-30)

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide tools that we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.  This feature is being tracked in bug 482874.

Status

Almost ready to go.  Notable progress this week:
  • The docshell changes have r+ from bz with just a few changes, most of which I've already made.
  • Axel has provided some feedback on localization.
  • Johnath has provided feedback on the XHTML.
  • Marco has provided some feedback about the Places changes.  Notably he points out that query performance could be an issue on large databases.
  • David Dahl has provided me with a "max_places" test database.  The bad news is the URL suggestions query can take ~30 seconds to run against this database.  The good news is that a very simple length-check guard in the query can make a big difference.
Next Steps
  • Address  the localization changes Axel raised, and then get formal L10N review.
  • Address the performance issues in the URL suggestions query.  (Probably just a LIMIT-based approach for the initial patch)
  • Get a formal review of Places changes (Dietrich did an informal review a few weeks ago).
  • Get a final UI review.
  • Make sure all the parts of the patch have r+ from the various parties.
  • Land the thing.
404 Error Page Status (2009-08-24)

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide tools that we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.

Status

The newest patch is almost ready for review.

This patch is mostly focused on making the code right, but there are a couple of functional changes:
  • The Firefox 404 error page is now pref-controlled.  There's a pref to turn it on and off, and a pref to limit which server-supplied 404 pages we'll override.  Currently we will only override server 404 pages of 512 bytes or less.
  • We're dropping the background image for now.


Next Steps

  • Complete the patch and get it in for review.
  • Blog about it.

Notes

  • This feature is being tracked by Bug 482874 - Provide a friendlier/more useful alternative when the user encounters a 404 error page.

Edit: Added Notes section.

Building Firefox under Eclipse/CDT, or I gotta have my identifier completion (2009-08-03)

Introduction

I've been using Eclipse as a C++ IDE for Mozilla development, which apparently puts me in a distinct minority among Mozilla developers.  Since I'm neither an Emacs nor a Vim user, I needed an alternative.  Frankly, Eclipse is not all that great as an editor, but having an IDE with halfway decent identifier completion and code browsing goes a long ways towards making up the difference.  It's not all that hard to set up an Eclipse project for mozilla-central (i.e. Firefox), but it's not exactly obvious either.  The purpose of this blog post is to explain one way to do so.

Disclaimer and Caveats

I'm not claiming that this is the optimal way to set up mozilla-central under Eclipse.  In fact, I'm not even claiming that it's right.  All I'm saying is that this approach is relatively simple and seems to actually work.  For expediency, these instructions make some assumptions.  First, they assume that you are developing under OS X.  I believe that they are easily adaptable to Linux and Windows+cygwin (or Windows+mozilla-build), but I don't know for sure.  Second, the instructions assume that you are already able to build mozilla-central from the command-line.  If you don't already know how to do that, then you don't want to start here.

Disclaimer #2

As a final disclaimer, this approach is relatively new to me.  I've mostly been using an older version of the CDT under Eclipse 3.0 and I'm not even going to mention what I had to do to get indexing to work properly.  This new approach is a lot simpler, and I'm reasonably confident that it will work well for sustained use.

Screenshots

I originally wrote these instructions in text-only form.  I've since added some marked-up screenshots for many of the steps.  These screenshots are attached at the end of the post.

Acknowledgements

And last, but not least, thanks to Mike "firetoad" Collins for trying out a slightly earlier version of these instructions.

Now the instructions.

Install Eclipse/CDT

  • Obviously you need Eclipse.  I will offer some advice, but I'm largely assuming that you can figure this part out for yourself.  These instructions assume that you are using Eclipse 3.5, better known as Galileo.  You need Eclipse with the CDT, the "C/C++ Development Tooling".  The easiest way to get this is to go to http://www.eclipse.org/downloads/ and download the pre-packaged Eclipse IDE for C/C++ Developers, which is what I've done.  I have an existing Eclipse installation, but rather than trying to upgrade, I've just got them side-by-side, which doesn't seem to be a problem as long as you set up separate workspaces for them.
  • By default Eclipse runs with a 256M heap, which is too small to reliably index the mozilla-central tree.  I've been running with 512M without any problem, although I've not really pushed it very hard yet.  There ought to be a way to configure the Eclipse heap size through the UI, but if there is, I haven't found it yet.  In the meantime, I've been launching it from the command-line, like so:
    • /Applications/eclipse-galileo/Eclipse-galileo.app/Contents/MacOS/eclipse -vmargs -Xmx512M &
  • Your path will differ, of course.  Also note that the path above is a little bit unusual because I've renamed the actual Eclipse application so it won't conflict with my other (older) Eclipse installation.
  • By default, Eclipse engages "Scalability Mode" for large files, which means it turns off most of the cool features.  I've increased the threshold to 15,000 lines (nsDocShell.cpp is about 11,000 lines of code, for example).  To change this setting, choose Eclipse >> Preferences, then type "scalability" into the search box.

Create a new project

  • Start Eclipse (but see the note above about making sure you're running it with enough memory).
  • Ctrl-click in the Project Explorer, choose New >> C++ Project
    • this will open the C++ Project dialog
  • In the C++ Project dialog,
    • In the Project name text field, enter the project name
      • I'm using firefox-eclipse
    • Uncheck Use default location
    • In the Location text field, enter the full path for the project source code.
      • I'm using /Users/cbartley/Dev/firefox-eclipse/src
    • Under Project type, choose Makefile project >> Empty Project
      • Note that other project options can cause Eclipse to create a default makefile, which you don't want.
    • Click Finish
  • You should now see the project firefox-eclipse in the Project Explorer.

Configure the project

  • Ctrl-click on firefox-eclipse in the Project Explorer, and choose Properties.
    • This will bring up the Properties for firefox-eclipse dialog.
  • In the Properties for firefox-eclipse dialog,
    • Select the Resource panel
      • Note the path after Location in the Resource panel.
        • This is the same path you entered a few steps above, but if you're like me, you've probably forgotten it already.
    • Select the C/C++ Build panel
      • Select Builder Settings inside the C/C++ Build panel
        • Uncheck Use default build command
        • In the Build command text field, enter:
          • bash -l -c 'make -f client.mk $0 $1' -b
            • this hairy looking command invokes a bash shell which in turn invokes the actual make command.  The dash-small-L option tells it to behave like a login shell.  This sidesteps some environment problems that I've run into when running make directly from Eclipse.  I won't elaborate on the $0, $1, or -b, except to tell you to be careful that you put the quotes in the right place!
        • In the Build directory text field, enter the path to the project source
          • I'm using /Users/cbartley/Dev/firefox-eclipse/src
            • Note that this the same path found in the Location text field on the Resource panel. I usually just Cmd-C copy it from there any more, to reduce the possibility of a typo
      • Select the Behavior tab inside the C/C++ Build panel
        • In the Build (Incremental build) text field, delete all and just leave the field empty.
    • Expand the C/C++ Build tab to show the tabs for its subpanels.
      • Select the Settings sub-panel
        • Select the Mach-O parser under Binary Parsers
    • Expand the C/C++ General tab to show the tabs for its sub-panels
      • Select the Index sub-panel
        • Check Enable project specific settings
        • Under Select indexer
          • Choose Full C/C++ Indexer (complete parse)
          • Check Index unused headers as C++ files
    • Click Apply
  • The project is now set up, but there's no source code yet.

Get the Source Code

  • Eclipse has created the directory for the project, but it's empty.  I've created my project at /Users/cbartley/Dev/firefox-eclipse/src.  Following standard convention, I want to check the mozilla-central project out to a directory named src, the same one as above, in fact.  The problem is that Mercurial complains if the directory already exists.  Checking the source out first doesn't help, because then Eclipse complains that the directory already exists.  I'm going to check mozilla-central out to a different location and then manually merge them.  Note that Eclipse has created some files inside the src directory that we want to preserve.
  • cd into the project directory
    • I do cd /Users/cbartley/Dev/firefox-eclipse/src
  • cd out one level
    • cd ..
      • I'm now in /Users/cbartley/Dev/firefox-eclipse
  • Now type the following commands
    • hg clone http://hg.mozilla.org/mozilla-central src-aside  # get the source code
    • mv src-aside/.h* src                                      # move mercurial files into src
    • mv src-aside/* src                                        # move regular files into src
    • rmdir src-aside                                           # remove the now superfluous src-aside directory

Set up the .mozconfig file

  • Create a basic .mozconfig file for the project; I'm assuming you've already got a usable one some place.
    • For example, I do cp ../mozilla/src/.mozconfig src
  • Make sure the "-s" option is NOT on for make
    • Normally make displays the complete command line for building each file.  When invoked with the "-s" option, however, it only displays the name of the file being built.  This is a problem for building under Eclipse, since Eclipse parses the build output to figure out where the header files are.
    • In my existing .mozconfig, I have the following line:
      • mk_add_options MOZ_MAKE_FLAGS="-s -j4"        # before
    • I delete the "-s", leaving
      • mk_add_options MOZ_MAKE_FLAGS="-j4"           # after

Build the project under Eclipse

  • Make sure that firefox-eclipse is selected in the Project Explorer.
  • Select the Console tab in the bottom pane so you can see the build output
  • Select Project >> Build Project from the menu
  • Wait

Build the project under Eclipse, again

  • We want to do a Clean rebuild under Eclipse; this extra step seems to be necessary for Eclipse to find the IDL-generated header files.  Don't ask me why.
  • Make sure that firefox-eclipse is selected in the Project Explorer
  • Select the Console tab in the bottom pane
  • Select Project >> Clean... from the menu
    • Make sure that Start a build immediately is checked
    • Click OK
  • Wait

Make sure indexing is started

  • After the build completes Eclipse should start Indexing automatically.  A progress indicator should appear in the status bar on the lower right.
  • If indexing hasn't started, you can invoke it manually by:
    • Ctrl-click on firefox-eclipse in the Project Explorer, and choose Index >> Rebuild
  • Indexing takes something like four hours on my machine.  Eclipse remains usable in the meantime, you just won't have access to the more advanced code-completion and browsing features.

After Indexing is completed

  • Cmd-Shift-R is kind of like the Awesome Bar for source files.
  • Cmd-Shift-T is kind of like the Awesome Bar for types, functions, etc.
  • Ctrl-Space invokes identifier completion.
  • Just typing identifier. or identifier-> will show a list of member variables and functions if you pause for a second.
  • F3 will take you to the declaration of an identifier
  • F3 over a #include will open the header file.
  • Hovering over a macro will show you its expansion.
  • Actually, many of the features will work in full (e.g. Cmd-Shift-R) or in part (e.g. identifier completion) before indexing is completed.

    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393084 2009-07-15T01:09:57Z 2013-10-08T16:46:37Z How do you surf the firehose?

    I'm not sure that "drinking from the firehose" is the right metaphor for Mozilla.  I still claim it's more like having people constantly shooting you with water pistols from every different direction.

    There's not much method to my madness, but here's how I try to keep up with the firehose:

    I'm subscribed to four newsgroups in Thunderbird:
    • mozilla.dev.planning
    • mozilla.dev.apps.firefox
    • mozilla.dev.platform
    • mozilla.dev.tree-management
    I don't actually read them any more, with the exception of mozilla.dev.planning which I now have set up to deliver straight to my main Mozilla email account.  (Because Mike Beltzner told me to.  "There's hardly any traffic on that list anyway" I recall him saying.)  Since conventional email is hardly used at Mozilla, this seems to work OK.

    I also have bugzilla notifications going to my main email account.  I don't know what I was thinking.

    I'm subscribed by email (my personal email) to the Firebug Group on Google Groups.  In hindsight this was a mistake, but I haven't fixed it yet.  I'm mostly skimming the thread subjects at this point, but I at least have some inkling of what's going on there.  I check this a couple of times a day when I check my personal email.

    I read planet.mozilla.com every couple of days.  These days I mostly skim it, reading just the occasional post.  I really should try to check it every day, since I find it pretty useful to the extent that I do read it.

    I peruse reddit and news.ycombinator.com (Hacker News) several times a day, and slashdot every now and then.  If a blog post doesn't make it to reddit or Hacker News, I probably won't see it.  I don't feel like I'm missing a lot of stuff by doing this, but on the other hand I do feel like I have to wade through a lot of crap to find the stuff I do want to read.  (Ha, Sturgeon's Law: 90% of everything is crap.  Curtis's corollary to Sturgeon's Law: On the Internet, 90% of the rest is crap too.)

    I don't do Twitter, largely because it seems to scratch an itch I don't have.

    I have three hours or so of phone meetings every week, not counting the weekly Mozilla.org meeting.  I take these meetings fairly seriously since I'm working remotely.  I don't take notes, but I do attempt to actually pay attention.  I'm prone to zoning out even in meetings I'm physically in, and it's worse when I'm dialed in remotely.  My secret weapon?  I usually surf pictures on Flickr while I'm on the phone.  This works surprisingly well since it doesn't require any major effort from the verbal part of my brain.  In the future I should probably take notes at least some of the time.  Like any time Mike Beltzner is saying something important.

    I don't use a feed reader.  In the past I've used Bloglines and Google Reader.  I abandoned Google Reader because I always hated its infinite scrolling model.  (I don't remember why I hated it; I like infinite scrolling in other cases.)  I do have a master list of blogs I read.  Despite the dozens of blogs on this list, I only read a few regularly these days, and none of them are technical blogs anymore.

    And of course Mozilla lives and breathes IRC.  I'm using Colloquy as my IRC client.  It's not wonderful, but it works OK.

    Tag, you're it.

    Robcee's original post (complete with standard questions which I totally ignored) was designed to be viral.  I can't beg off this part, since I put him up to it.  Seriously, though.  I'm only asking because I think I might learn something.
    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393085 2009-05-08T21:36:35Z 2013-10-08T16:46:37Z What should Mozilla look for in an automated review system?

    In last week's All Hands, when talking about possible process improvements, Mike Shaver asked if there was any interest in some sort of automated review system.  Google's Mondrian is probably the most well-known review system.  Review Board seems to be the best-known open source review system.  I've never used Review Board, but as an ex-Googler, I have used Mondrian.

    I think Mozilla would really benefit from a good review system.  My review system experience is limited to Mondrian, which is, of course, proprietary to Google.  I don't know how Review Board and the other open source systems measure up, and, as I'm about to get to, I think Mondrian itself has some notable deficiencies.  My overall point is that if we decide to adopt a review system, we should choose carefully.  I'll try to offer some guidance about how to do that.

    Quick Overview of Mondrian

    Mondrian has a dashboard view for outstanding changelists but to me the essence of Mondrian is its UI for reviewing a single changelist, and that's what I'm going to talk about here.  At the highest level you have a list of files modified in the changelist.  You can click on each file and get a diff of that file against the baseline in a side-by-side view.  You can easily see which lines were added, modified, or deleted.  OK, everybody is familiar with side-by-side diffs these days.  Here's the cool part though: A reviewer can click on any line and leave an inline note or comment.  Afterwards, the reviewee can go through the diff and respond to the comments.  The way this often works in practice is the reviewer leaves an instruction in the comment and the reviewee responds by making the requested change and replying to the comment with "Done."  In fact, there's a "Done" button as well as a "Reply" button on the comment just to make this last part easy.

    Once the reviewee has updated the changelist to the reviewer's satisfaction, the reviewer will approve the change and the reviewee can submit it to the source control system.

    What's not to like?

    Mondrian's review UI is great for simple line-by-line reviews, things like:
    • "You don't need to check for null here."
    • "You should check for an error here."
    • "Fix indentation" 
    Sometimes this level of line-by-line scrutiny is really useful.  For example, exception-free C++ code often requires the programmer to check virtually every single function call for errors.  This is hard for even the best programmers to get 100% right.  But let's be clear here.  What Mondrian is really good for, all the time, is line-by-line nitpicking of code.  And frankly, line-by-line nitpicking of code is already pretty easy without some fancy tool like Mondrian to make it extra easy.

    Mondrian's review comment system really seemed to encourage a style where there was a one-way flow of instructions from the reviewer to the reviewee: "Do this.  Do this.  Do this." and the reviewee replies with "Done.  Done.  Done."  Sometimes this is appropriate, but oftentimes it isn't.

    Of course the reviewee always has the option of clicking "Reply" instead of "Done".  You could have a whole thread of comments and replies if you wanted to.  But given the limitations of the UI, that would be kind of like communicating with short messages written on post-it notes.  And not regular sized post-it notes either, but rather those extra tiny ones.

    So Mondrian not only encouraged a review focus on individual lines, it also tended to encourage a one-way flow of information from reviewer to reviewee which could easily degrade into a one-way flow of commands.

    What would I want in a "good" review system?

    It may seem like I'm arguing that a review system should actively discourage line-by-line review, and that's not actually the case.  I think that review style is often useful, and pretty much any review system, good or bad, will support it.

    There are really two fundamental things that I do want out of a review system.
    • The system should go out of its way to support a bi-directional flow of information between the reviewer and the reviewee.  In the extreme, it should provide a means of carrying on an actual conversation.  This could be supported within the review system itself, but even a simple means of escalating a conversation to email (or even IRC) might be a big help.
    • The system should support higher level discussions about the code under review.  Actually I think it should go so far as to encourage this kind of information flow.  You can sort of do this with Mondrian, but you are usually better off just going to email.

    Some general guidance

    I've put a fair amount of thought into how I'd like an ideal review system to work.  Mostly I've been thinking in terms of concrete features, but that's of limited utility if what you want to do is choose between existing review systems rather than writing a review system yourself.

    So I'm trying to figure out some general guidance.  I think what I'm trying to say in this post more than anything is that affordances matter.  Mondrian, for example, seems to afford the line-by-line scrutinizing, nitpicking approach to code reviews.  It also seems to afford a model where the reviewer simply gives instructions to the reviewee and the reviewee just automatically carries them out.

    Mondrian offers some minimal affordance for discussing (as opposed to simply commenting on) a particular line of code, but it could do a lot more.  Notably it does not seem to offer any real affordance for discussion at the design level, which has always seemed to me like a serious omission.

    A simple concrete example

    Here's one simple way that Mondrian could be improved which I hope will illustrate my point about affordances.  As described above, Mondrian comments have two buttons, "Reply" and "Done".  It could offer other choices as well, so you might have:

    "Reply", "Done", "Done w/comment", "Defend", etc.

    These latter two functions could easily be done with "Reply", but if you give them their own buttons, you explicitly tell the reviewee that there are other options that can be considered here.  In particular, in this case, they give the reviewee permission to say "I had a reason for doing this the way I did, would you please consider my reasoning?".
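    To make that concrete, here's a minimal sketch of the idea in pseudo-JavaScript, in the same spirit as the examples I use elsewhere on this blog.  None of these names come from Mondrian; they're purely illustrative.  The point is that each affordance is just an entry in a table, so adding "Defend" costs almost nothing:

    // Hypothetical comment-action table -- not Mondrian's actual API.
    // Each action is a first-class affordance with its own button.
    var commentActions = [
      { label: "Reply",          newStatus: "open",      requiresText: true  },
      { label: "Done",           newStatus: "resolved",  requiresText: false },
      { label: "Done w/comment", newStatus: "resolved",  requiresText: true  },
      { label: "Defend",         newStatus: "contested", requiresText: true  }
    ];

    // Apply a chosen action to a review comment.
    function respondToComment(comment, action, author, text) {
      comment.status = action.newStatus;
      if (action.requiresText) {
        comment.replies.push({ author: author, text: text });
      }
    }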

    My larger point here is that a review system's UI strongly shapes the code review process, and it can shape it in good ways or bad ways.  As a result, we want to think not just about what we want out of a code review system, but also what we want for the code review process itself.

    Super-short Summary

    1. A good review system needs to support two-way information flow, and it should probably go so far as to actively encourage it.
    2. A good review system should support review at a higher level than simply line-by-line, and it should probably go so far as to actively encourage it.
    3. Affordances matter.

    Notes

    • This post describes Mondrian as of about nine months ago when I last used it.  It may have received a major upgrade since then; I don't know.  Mondrian may also have had useful features that I didn't know about -- not all of its features had good discoverability.
    • Probably the most common code review case at Google is a reviewer and reviewee who are peers, have known each other for months if not longer, and who sit near each other, often in the same room.  Mondrian's limitations are a lot less important in this scenario, since a lot of out-of-band communication can happen the old fashioned way, by talking face-to-face.
    • In some circumstances you can have reviewers and reviewees who work in different locations and who have never met in person.  This is not uncommon at Google, and it is in fact very common at Mozilla.  In this scenario, the way the review system works becomes much more important.  
    • The obvious way to get around Mondrian limitations is to fall back to email.  I ultimately started emailing concerned parties requesting informal reviews before requesting formal reviews through Mondrian.  Mondrian can still be used in this case -- it can see pretty much any outstanding changelist.  Nevertheless, by making an informal request in email, you could get a high-level review of a changelist without getting mired in low-level details.  And since this was through email rather than Mondrian comments, you could hold an actual conversation about it.  It turned out my team's superstar intern had independently started doing the same thing.  I jokingly referred to this as "doing a review before the review".
    • This post might lead you to believe that I'm a Mondrian-hater.  Actually, I think Mondrian is a very good tool, I just feel like they quit working on it before they were done.
    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393086 2009-04-04T16:47:00Z 2015-04-06T22:04:12Z The Firebug tabs-on-top UI refactoring has landed

    Note: Since I was getting lots of comment spam I've closed comments here.  If you have any comments or suggestions, feel free to file them on the Firebug Issues page or drop us a line in the Firebug discussion group in the Tabs on Top thread.

    The Firebug tabs-on-top UI refactoring has finally landed in Firebug 1.4 alpha.  You can download it as firebug-1.4.0a17.xpi (or later) from http://getfirebug.com/releases/firebug/1.4.  This change is pretty close to the one I described and prototyped in February (see Improving the Firebug UI) and which in turn follows my original proposal from late last year (see Firebug UI: Tabs on the Top).  These blog posts describe the reasoning behind the change so I won't rehash it in full.

    The old layout had a toolbar at the top of the Firebug UI, and a tabbed "panel bar" below it.  Sometimes a tabbed "side panel bar" would show up to the right of the main panel bar.  This change essentially takes the toolbar and the side panel bar (when it appears) and puts them inside the active panel.  This puts the panel tabs at the top of the Firebug UI (hence the name "tabs-on-top") and panel-specific controls underneath.  Several controls that are not panel-specific (the main Firebug menu, the search box, and the detach window and close window buttons) have been moved to the tab strip, so they are effectively in the same location as before the change.  The idea is that UI elements that are specific to a panel look like they are inside that panel.  This description probably makes the change sound more complicated than it really is.  A screenshot will better communicate how it looks.


    Firebug UI with the tabs-on-top layout.  The Net panel is selected.

    Even the screenshot is a poor substitute for actually downloading the extension and trying it out, so let me encourage you to do that.

    There are still some outstanding issues before this change can really be called complete.
    • The location of the main panel Options menu is less than ideal.  I think the right thing to do here is to merge the panel options with the menus on the panel tabs, but I haven't had time to prototype it yet.
    • The Inspect button really belongs on the top toolbar.  I've held off on relocating it because I think it will look too much like the label of a panel tab.  There are a couple of options here.  We could change the styling to make it look more like a button.  We could replace the label with an icon, or we could change the tab styling so even unselected tabs show a tab outline.  Probably any of these solutions would work, we just need to figure out what the best one is.
    • The position of the debugger buttons can jump drastically under some conditions.  Normally the debugger buttons only appear when the Script panel is active.  However, if JavaScript execution is paused, the debugger buttons always appear, regardless of the tab.  With the new layout this means their position can jump if you switch to another tab which doesn't have a side panel bar.  I think it might make sense to move them to the left side of the toolbar, which will eliminate this problem.  This seems like it might be a fairly controversial change, so we really need to explore it separately.
    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393087 2009-03-06T18:47:00Z 2013-10-08T16:46:37Z What are "futures", and why should you care?

    Motivation

    Last week we were discussing, among many other things, ways to speed up Firefox during startup.  One obvious option was to move more of our I/O off of the main thread.  This in turn involves making more code asynchronous, and asynchronous code is simply harder to manage.  Mike Shaver mentioned something about "futures" as a possible way to handle the extra complexity, and then the discussion moved on to something else.

    I'm not exactly an expert, but I've not just used futures, I've written my own implementations in JavaScript and even Object Pascal (in hindsight I'm not sure the latter was a good idea, but it was certainly an interesting exercise).  Futures seem esoteric, but they really shouldn't be -- the idea is really quite simple.  In this post I'll try to explain what futures are and how they can be used to make asynchronous programming easier.


    What exactly is a future anyway?

    In the simplest form, a future works like an IOU.  I can't give you the money you've asked for right now, but I can give you this IOU.  At some point in the future, you can give me the IOU and I'll give you the money -- if I have it.  If I don't have it yet, then you can wait until I do.  I get paid on Friday.

    Alternatively there's the dry cleaner metaphor.  You drop your clothes off on Monday and the clerk gives you a ticket that you can use later to reclaim your clothes after they've been cleaned.  The clothes will be ready on Tuesday morning, but if you show up too early, you'll have to wait.  On the other hand, if there's no hurry, you can just do other stuff on Tuesday and show up on Wednesday with a reasonable expectation that they'll be ready when you arrive.  You'll just hand your ticket over, collect your clothes, and be on your way.

    A future is similar to the IOU (or the dry cleaning ticket).  It gives you a way to represent the result of a computation that has not yet completed, and it allows you to access that result once it becomes available.  So you can call a function which starts some asynchronous process but doesn't wait for it to finish.  Nevertheless the function can return you a useful result: a future which can be used to claim the real result later.

    Of course if you ask for the result too soon, you'll have to wait.  On the other hand, if the result becomes available before you want it, then it will wait for you.


    A simple example

    Here's an example of what this might look like in pseudo-JavaScript:

    function doStuff() {
      var cleanClothesFuture = dryCleaner.dropOff(dirtyClothes);
      runErrands();
      work();
      eat();
      watchTv();
      sleep();
      var cleanClothes = cleanClothesFuture.get();  // block if the result is not ready yet
    }

    Compare this to the traditional way we'd handle this in JavaScript, using a callback:

    var cleanClothes = null;

    function doStuff() {
      dryCleaner.dropOff(dirtyClothes, function (clothes) { cleanClothes = clothes; });
      runErrands();
      work();
      eat();
      watchTv();
      sleep();
    }

    These examples are not one hundred percent semantically identical, but they should be close enough to illustrate the point.  I contend that the first function is easier to write, easier to read, and easier to reason about.  I also contend that the difference isn't enough to get excited about.  It's when things get more complicated that futures become really useful.


    A more complicated example

    Imagine that I have a web page that sends an AJAX request to a server and then displays the results in an IFRAME -- and furthermore does it automatically on page load.  I have to wait for both the AJAX request to return data and for the IFRAME to finish loading -- only then can I display the results.  This can be done fairly simply using callbacks:

    function showData(dataUrl, iframeUrl) {
      var data = null;
      var iframeBody = null;

      function tryToShowData() { if (data && iframeBody) { showDataInIframe(data, iframeBody); } }

      requestDataFromServer(dataUrl, function (response) { data = response.data; tryToShowData(); });
      insertIframeBody(iframeUrl, function (iframeDoc) { iframeBody = iframeDoc.body; tryToShowData(); });
    }

    Now, imagine the same thing done with futures:

    function showData(dataUrl, iframeUrl) {
      var dataFuture = requestDataFromServer(dataUrl);
      var iframeBodyFuture = insertIframeBody(iframeUrl);
      showDataInIframe(dataFuture.get(), iframeBodyFuture.get());
    }

    Again, these two examples are not semantically equivalent -- notably there's no blocking in the first example.  Now let's imagine that we had a way to turn an ordinary function into a new function which takes futures as arguments and which returns a future in turn.  As soon as all the future arguments became available, the base function would be called automatically -- and once the base function completed, its result would be accessible through the future returned earlier.  I'll call this capability "callAsap": call a function as soon as possible after all of its future arguments become available.  Using callAsap(), the previous example might be rewritten as:

    function showData(dataUrl, iframeUrl) {
      var dataFuture = requestDataFromServer(dataUrl);
      var iframeBodyFuture = insertIframeBody(iframeUrl);
      showDataInIframe.callAsap(dataFuture, iframeBodyFuture);
    }

    In this case we don't care about the return value of showDataInIframe.  This example is much closer in behavior to the earlier callback-based example.  In fact, the callAsap() method would be implemented with callbacks underneath, but they would all be nicely abstracted away under the hood.
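    To make "under the hood" a little less mysterious, here's a minimal sketch of what a listenable future and callAsap() might look like.  This is pseudo-JavaScript in the same spirit as the other examples -- Future, resolve, and onResolve are hypothetical names, not any real library's API:

    // A hypothetical listenable future: it holds a value once resolved,
    // and notifies any registered listeners when that happens.
    function Future() {
      this.resolved = false;
      this.value = undefined;
      this.listeners = [];
    }

    Future.prototype.resolve = function (value) {
      this.resolved = true;
      this.value = value;
      this.listeners.forEach(function (listener) { listener(value); });
    };

    Future.prototype.onResolve = function (listener) {
      if (this.resolved) { listener(this.value); } else { this.listeners.push(listener); }
    };

    // Call the base function as soon as all of its future arguments have
    // resolved, and return a future for the base function's own result.
    Function.prototype.callAsap = function (/* ...futures */) {
      var baseFunction = this;
      var futures = Array.prototype.slice.call(arguments);
      var resultFuture = new Future();
      var remaining = futures.length;
      if (remaining === 0) { resultFuture.resolve(baseFunction()); return resultFuture; }
      futures.forEach(function (future) {
        future.onResolve(function () {
          remaining -= 1;
          if (remaining === 0) {
            var values = futures.map(function (f) { return f.value; });
            resultFuture.resolve(baseFunction.apply(null, values));
          }
        });
      });
      return resultFuture;
    };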

    One of the nice things about callAsap() is that it can nicely handle cases where you are waiting on more than one future.  Imagine that you've asynchronously requested data from two different servers:

    function showData(dataUrl1, dataUrl2, iframeUrl) {
      var dataFuture1 = requestDataFromServer(dataUrl1);
      var dataFuture2 = requestDataFromServer(dataUrl2);
      var iframeBodyFuture = insertIframeBody(iframeUrl);
      showDataInIframe.callAsap(dataFuture1, dataFuture2, iframeBodyFuture);
    }

    This segues nicely into the next topic: Arrays of futures.


    Arrays of futures

    Imagine if you have not one, or two, or three futures, but rather an arbitrary number of futures.  What we'd really like to have is a way to take an array of futures and produce from it a single future for an array of concrete values.  Something like:

    function showData(dataUrlArray, iframeUrl) {

      // The "dataFutureArray" is a concrete array of futures.
      var dataFutureArray = requestDataFromServers(dataUrlArray);

      // The "dataArrayFuture" is a future for a concrete array of concrete values.
      var dataArrayFuture = Future.createArrayFuture(dataFutureArray);

      var iframeBodyFuture = insertIframeBody(iframeUrl);
      showDataInIframe.callAsap(dataArrayFuture, iframeBodyFuture);
    }

    What this example might look like rewritten in callback style is left as an exercise to the reader.
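    For what it's worth, createArrayFuture() falls out of the same listenable-future sketch from earlier.  Again, these are hypothetical names, and this is only one plausible way to do it:

    // One possible createArrayFuture(): resolve the array future once
    // every future in the input array has resolved.
    Future.createArrayFuture = function (futureArray) {
      var arrayFuture = new Future();
      var values = new Array(futureArray.length);
      var remaining = futureArray.length;
      if (remaining === 0) { arrayFuture.resolve(values); return arrayFuture; }
      futureArray.forEach(function (future, i) {
        future.onResolve(function (value) {
          values[i] = value;
          remaining -= 1;
          if (remaining === 0) { arrayFuture.resolve(values); }
        });
      });
      return arrayFuture;
    };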


    An advanced example

    OK, now for a more elaborate example.  Imagine a function which retrieves the first page of Google search results for a particular query, and then goes through and re-orders the results based on its own ranking system.  Furthermore, imagine that this ranking is computed based on the contents of each web page.  We'll need to make requests to many different servers for many different web pages.  This will be fastest if we issue all the requests at once.

    function search(query) {

      // Take a concrete search result and return a future to a
      // [webPage, ranking] pair.

      function requestWebPageAndRanking(searchResult) {
        var webPageFuture = requestWebPage(searchResult.url);
        var rankingFuture = computeRankingFromContent.callAsap(webPageFuture);
        return Future.createArrayFuture([webPageFuture, rankingFuture]);
      }

      // Take a concrete array of search results and return a future to
      // an array of [webPage, ranking] pairs, sorted by ranking.

      function requestSearchResultsSortedByRanking(searchResultArray) {
        var rankingArrayFuture = Future.createArrayFuture(
          searchResultArray.map(requestWebPageAndRanking)
        );
        return sortArraysByKeyIndex.callAsap(rankingArrayFuture, 1);
      }

      // Request search results, re-rank them, and then display them.
      var searchResultArrayFuture = requestGoogleResults(query);
      var sortedRankingArrayFuture =
        requestSearchResultsSortedByRanking.callAsap(searchResultArrayFuture);
      showSearchResults.callAsap(sortedRankingArrayFuture);
    }

    In all fairness, this is not as simple as a synchronous blocking implementation.  Keeping your arrays of futures and futures of arrays straight is a little bit taxing.  Imagine what a callback model might look like, however, with callbacks inside callbacks.  One advantage of using futures is that you can often write traditional blocking code and then in straightforward fashion translate that code into asynchronous code using futures.
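    To see that translation in action, here's roughly what the blocking version of the search example might look like.  The synchronous helpers (fetchWebPage and friends) are hypothetical, just the blocking counterparts of the functions above:

    // A hypothetical synchronous version of search(), for comparison.
    // Each step blocks, but the shape maps almost line-for-line onto
    // the futures version above.
    function searchBlocking(query) {

      function fetchWebPageAndRanking(searchResult) {
        var webPage = fetchWebPage(searchResult.url);  // blocks
        return [webPage, computeRankingFromContent(webPage)];
      }

      var searchResultArray = fetchGoogleResults(query);  // blocks
      var rankingArray = searchResultArray.map(fetchWebPageAndRanking);
      showSearchResults(sortArraysByKeyIndex(rankingArray, 1));
    }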

     

    Notes, in no particular order

    • The examples may look like JavaScript, but they are, in fact, pseudo-code.  The implementation of some of the helper methods, pseudo-code or otherwise, is left to the imagination.
    • I have completely glossed over error handling, including such interesting topics as exception tunneling, fallback values (nulls, empty arrays, NullObjects), not to mention timeouts and retries.  If this sounds scary, it's because error handling in any kind of async code is a difficult topic.  Futures don't make the situation any worse, and might make it better.
    • The name "callAsap" is my invention, although I'm certain the underlying idea has been invented independently many times.  Also note that callAsap() and Future.createArrayFuture() are fundamentally quite similar.
    • Java futures (java.util.concurrent.Future) use a blocking get() method like the one in the first example.  I don't actually know how you could do a blocking get in conventional single-threaded JavaScript, which is the whole genesis of callAsap().  Practical JavaScript futures need to be "listenable futures" which raise events when they resolve.  The methods callAsap() and Future.createArrayFuture() can then be implemented using this capability.  Client code can then use these methods to avoid writing explicit callbacks.
    • The re-ranking Google search results example is contrived, but it's based on a similar project I did a few years ago.  In that project I used callbacks, and it was quite painful.
    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393088 2009-02-17T18:02:00Z 2015-04-06T22:01:25Z Improving the Firebug UI

    Refactoring the Firebug UI

    My main project for the last few weeks has been doing some Firebug UI work that I’ve been thinking about for a while.  Since I’ve just been moving UI elements around (to make the layout more logical), I’ve been referring to this effort as “UI refactoring”.

    So here’s the deal.  It’s not enough that I think this redesign is an improvement, or even that the Firebug working group has been supportive.  We need to know what you, the Firebug user, think.

    Before I get started, I should let you know that I’ve been working on the Firebug 1.4 branch, which is still undergoing active development.  The changes I’m proposing are fairly simple in the sense that they are really only a reorganization of existing UI elements.  However, if you have not yet seen 1.4, be aware that there are other significant UI changes, notably John J. Barton’s work on the activation UI, kpdecker’s multi-file search capability, and Hans Hillen’s work on accessibility.

    What’s wrong with the current Firebug UI?

    The basic UI layout for Firebug today consists of a toolbar across the top, a main “panel bar” underneath, and an optional “side panel bar” to the right.  Each panel bar consists of a set of tabbed panels, only one of which is visible at a time.

     

    This is a simple design, and seems pretty logical.  There’s a problem, however.  The contents of the toolbar can change substantially depending on which panel is selected in the main panel bar.  For some panels this is maybe not such a big deal.  At the other extreme, consider the Net panel.  When the Net panel is selected, it adds seven buttons to the toolbar which control what information is displayed inside the Net panel’s content area.

    For me, using these buttons, with this placement, feels even weirder than the screenshot looks.  They would feel much more natural if they were adjacent to the content area that they affect.

    Furthermore, the side panel bar is only visible when either the HTML tab or the Script tab is selected.  And, of course, it’s a different set of panels that show up in the side panel bar depending on whether it’s the HTML panel, or the Script panel.  Logically, you could think of the HTML panel as having a smaller panel bar embedded inside it which in turn contains several sub-panels specific to the HTML view.  You can think of the Script panel similarly.  The other panels simply don’t have any sub-panels they need to display.
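    If it helps, here’s that same logical structure written down as a little pseudo-JavaScript object.  The sub-panel names are from memory and purely illustrative:

    // The logical nesting: some panels carry their own sub-panels, even
    // though today's layout draws them in a shared side panel bar.
    var firebugPanels = {
      Console: { subPanels: [] },
      HTML:    { subPanels: ["Style", "Layout", "DOM"] },
      CSS:     { subPanels: [] },
      Script:  { subPanels: ["Watch", "Stack", "Breakpoints"] },
      DOM:     { subPanels: [] },
      Net:     { subPanels: [] }
    };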

    Basically, the problem is that clicking a tab changes not only the main panel below the tab strip, but also the toolbar above it, and oftentimes the side panel bar to the right as well.  You click on a tab, and the whole UI mutates around it.  Even though the current UI layout makes superficial sense, I contend that the toolbar – or at least a big chunk of it – belongs inside the main panel bar.  I also think you can make an argument that the side panel bar, when it appears, should look like it’s inside the currently selected panel.

    Another way to think of it: The tabs – Console, HTML, CSS, Script, DOM, and Net – should appear at the top of the Firebug UI where the panel-specific toolbar elements appear now.  Since these main panels are the major organizing principle of the Firebug UI, it makes sense that they should be more prominent.

    I’m clearly not the first person to come to this conclusion since Firebug Issue 222 makes a similar argument.

    This is the layout I have working now.  I’ve been referring to this as the “tabs-on-top” layout.

    In this design, not all of the toolbar has been relocated.  The main Firebug menu that hangs off the Firebug button is in the top-left just as it always has been, and the search elements and Firebug window control elements are still at the top right.  Although the side panel bar doesn’t really appear to be inside the main panel, it does at least appear subservient.

    There are some smaller changes as well.  Notably the tabs are now "stand-up" style, rather than "hanging" style.

    The handling of the main panel Options button and the Inspect button still leaves something to be desired.  Nevertheless, I think this is a more logical layout, although you probably have to use it to really appreciate the improvement.

    What now?

    As I mentioned above, you really need to try out the new layout to see if it really does work better.  To that end, I’ve attached an experimental version of the Firebug extension to the post, so you can download it and try it out.  I’ve tested it fairly thoroughly on Windows and to a lesser extent on Mac and Linux.  Also, since it’s built on top of the 1.4 branch, which is still under active development, you probably won’t want to leave it installed for too long.  However, it should be good enough for you to take it for a test drive.

     

    Curtis Bartley