tag:curtisb.posthaven.com,2013:/posts Curtis Bartley's Blog 2016-12-06T02:45:05Z Curtis Bartley tag:curtisb.posthaven.com,2013:Post/1113214 2016-12-06T02:45:04Z 2016-12-06T02:45:05Z Someday Soon "Chemtrails" May Be Real
Airplane contrails are real, but we know what they are: water vapor and small amounts of other combustion products created by jet engines.  The other pollutants may be somewhat harmful, but they are essentially the same ones produced by any other form of combustion.  So what are chemtrails?  Wikipedia sums it up nicely:

The chemtrail conspiracy theory is the unproven belief that long-lasting trails, so-called "chemtrails", are left in the sky by high-flying aircraft and that they consist of chemical or biological agents deliberately sprayed for sinister purposes undisclosed to the general public. [1]

When I say chemtrails may be real in the future, I'm really only making the claim that high-flying aircraft will be deliberately spraying chemicals that are not simply a normal by-product of jet engine operation.  However, it won't be carried out for "sinister" reasons and it won't be done in secret.  Instead it will only be done after a great deal of public discussion and scientific deliberation.

The chemical in question?  Sulfur dioxide. 

In 1991 Mount Pinatubo in the Philippines produced the second largest volcanic eruption of the 20th century.  Besides massive amounts of volcanic ash, Pinatubo injected an estimated 17 million metric tons of sulfur dioxide into the atmosphere.  This caused measurable global cooling of about four-tenths of a degree Celsius from 1991 to 1993.  The effects of the eruption lasted for about three years. [2]

The Mount Pinatubo eruption and its worldwide effect on the climate have raised the possibility that we could intentionally inject sulfur dioxide into the atmosphere to counteract the effects of global warming.  This kind of large-scale intentional intervention is frequently called "geoengineering", although Wikipedia prefers the term "climate engineering". [3]

Seventeen million tons of sulfur dioxide might seem like a lot, but over a one-year period spanning 2013 and 2014, U.S. airlines alone consumed about 50 million tons of aviation fuel.  If jet aircraft were burning fuel at altitude as sulfur-heavy as marine bunker fuel (3 or 4%), then we could be talking about the equivalent of a Mount Pinatubo eruption every couple of years. [4][5][6]

A massive climate engineering project to inject sulfur dioxide into the stratosphere is clearly plausible, but is it a good idea?  We can do it, but should we?  There are two big questions we need to answer.  How useful is it? And: How risky is it?

Earth's surface has warmed by about 0.85 degrees Celsius from 1880 to 2012.  In comparison, the effects of the 1991 Mount Pinatubo eruption lasted for a couple of years and caused global cooling that was estimated to be about 0.4 degrees Celsius.  Pinatubo-scale sulfur dioxide injection -- if sustained -- could offset a large proportion of the global warming that we've already experienced.  Of course sulfur dioxide injection only treats the symptoms of global warming, not the root cause.  But it sure looks like it could provide some real value as an intermediate stop-gap solution while other more permanent solutions are brought online. [7][2]

The data from the Mount Pinatubo eruption tells us that sulfur dioxide in the stratosphere can cause cooling comparable to the global warming we've seen so far.  But it tells us something else as well: the effects of stratospheric sulfur dioxide have a limited lifetime, on the order of just a couple of years.  This is bad in the sense that to be useful, the injection program has to be sustained.  But it is good -- very, very good -- in the sense that it reduces risk.  The program could be scaled up, the impact measured, and, if the side effects are too serious, the whole thing scaled back down again very rapidly.

Global warming melts polar ice packs and ice sheets, exposing darker water and land underneath.  The reduction in reflectivity results in more of the sun's energy being absorbed by the earth, resulting in warmer temperatures, resulting in more ice melting, resulting in more land and water being exposed, resulting in even warmer temperatures.  This is often incorrectly referred to as a "negative feedback loop".  In reality it is a positive feedback loop with negative consequences.  There are other feedback loops of great concern.  For example, as arctic regions get warmer, methane that has been trapped in permafrost and undersea clathrates is released.  And since methane is a powerful greenhouse gas, it too can cause a positive feedback loop. [8][9]

The offset cooling provided by stratospheric sulfur dioxide injection would interfere with these feedback loops.  This could be particularly valuable in the case of methane, which is a more powerful greenhouse gas than carbon dioxide but which also has a quite short lifetime in the atmosphere.

Stratospheric sulfur dioxide injection is not a panacea.  But what it can do is buy us time.  It can buy us time to replace fossil fuels with cleaner sources of energy that don't release carbon dioxide into the atmosphere. It can buy us time to replace less energy efficient technologies with more efficient ones. It can buy us time to deploy large scale climate engineering projects that do remove carbon dioxide from the atmosphere. It can buy us time to deal with the immediate consequences of global warming and climate change (and we are simply going to have to deal with some of them), and indeed it can buy time for developing economies to, well, develop.  Because everything we need to do is going to be easier if everybody has more money.

Now, to get back to my original point.  I don't know for sure that we're going to be using jet aircraft to inject sulfur dioxide into the atmosphere, and I'm no expert anyway.  But I think we're going to do it.  And if we do, the result will be, quite literally, "chemtrails".

  • [1] https://en.wikipedia.org/wiki/Chemtrail_conspiracy_theory
    • The chemtrail conspiracy theory is the unproven belief that long-lasting trails, so-called "chemtrails", are left in the sky by high-flying aircraft and that they consist of chemical or biological agents deliberately sprayed for sinister purposes undisclosed to the general public.
  • [2] https://en.wikipedia.org/wiki/Mount_Pinatubo
    • The second-largest volcanic eruption of the 20th century, and by far the largest eruption to affect a densely populated area, occurred at Mount Pinatubo on June 15, 1991.
    • The injection of aerosols into the stratosphere is thought to have been the largest since the eruption of Krakatoa in 1883, with a total mass of SO2 of about 17,000,000 t (19,000,000 short tons) being injected—the largest volume ever recorded by modern instruments.
    • This very large stratospheric injection resulted in a reduction in the normal amount of sunlight reaching the Earth's surface by roughly 10% (see figure). This led to a decrease in northern hemisphere average temperatures of 0.5–0.6 °C (0.9–1.1 °F) and a global fall of about 0.4 °C (0.7 °F).
    • The stratospheric cloud from the eruption persisted in the atmosphere for three years after the eruption.
  • [3] https://en.wikipedia.org/wiki/Climate_engineering
    • Some proposed climate engineering methods employ methods that have analogues in natural phenomena such as stratospheric sulfur aerosols and cloud condensation nuclei. As such, studies about the efficacy of these methods can draw on information already available from other research, such as that following the 1991 eruption of Mount Pinatubo.
    • Climate engineering, commonly referred to as geoengineering, also known as climate intervention,[1] is the deliberate and large-scale intervention in the Earth’s climatic system with the aim of limiting adverse climate change.
  • [5] https://en.wikipedia.org/wiki/Fuel_oil
    • Governing bodies (i.e., California, European Union) around the world have established Emission Control Areas (ECA) which limit the maximum sulfur of fuels burned in their ports to limit pollution, reducing the percentage of sulfur and other particulates from 4.5% m/m to as little as 0.10% as of 2015 inside an ECA. As of 2013 3.5% continued to be permitted outside an ECA, but the International Maritime Organization has planned to lower the sulfur content requirement outside the ECA's to 0.5% m/m.
  • [6] Math
    • How much sulfur dioxide could be produced by burning 16.2 billion gallons of fuel which is 3.5% sulfur by weight?
      • > 16.2 billion gallons
      • > 61.3 billion liters                   ; 3.78541 liters per gallon
      • > 47.5 billion kg                       ; 775 g/l or 0.775 kg/l, the lower figure for jet fuel
      • > 47.5 million tons                     ; 1000kg per metric ton
      • > 1.66 million tons                     ; tons sulfur, assuming 3.5% sulfur by weight
      • > 3.32 million tons                     ; sulfur dioxide is about 1/2 sulfur by mass (32.06 / 64.066)
    • Answer: 3.32 million tons sulfur dioxide
      • This number is at best an estimate.  Aircraft would only burn sulfur-heavy fuels at altitude, so not all fuel consumed would produce stratospheric sulfur dioxide.  On the other hand, average jet fuel density is no doubt greater than 775 g/l, and sulfur concentrations of up to 4.5% have been reported for marine bunker fuel.
      • Most importantly, U.S. air travel only accounts for a fraction of total air travel -- a large fraction, no doubt, but probably less than half.
        • > 3.32 million tons                    ; assume U.S. aircraft inject 3.32 million tons of sulfur dioxide
        • > 6.64 million tons                    ; assume European and Asian aircraft can inject a similar amount
        • > 13.3 million tons                    ; assume a 2 year time horizon
  • [7] https://en.wikipedia.org/wiki/Global_warming
    • The global average (land and ocean) surface temperature shows a warming of 0.85 [0.65 to 1.06] °C in the period 1880 to 2012, based on multiple independently produced datasets.
  • [9] https://en.wikipedia.org/wiki/Arctic_methane_emissions
    • Arctic methane release is the release of methane from seas and soils in permafrost regions of the Arctic, due to deglaciation. While a long-term natural process, it is exacerbated by global warming. This results in a positive feedback effect, as methane is itself a powerful greenhouse gas.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/864694 2015-06-03T02:21:29Z 2015-06-03T02:21:30Z Faded Paper Figures: "The Persuaded"
Faded Paper Figures: "The Persuaded": 
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/817307 2015-03-02T07:10:44Z 2015-03-02T07:10:45Z Soldier's Eyes - Jack Savoretti
Soldier's Eyes - Jack Savoretti: 
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/813793 2015-02-20T18:53:01Z 2015-02-20T19:40:14Z Self-sinking capsules and cryobots
Probing Of The Interior Layers Of The Earth With Self-sinking Capsules: http://www.cmp.caltech.edu/refael/league/radioactive-core-earth.pdf

It is shown that self-sinking of a spherical probe in the form of a capsule filled with radionuclides, whose decay heats and melts the rock in its path, deep into the Earth is possible. Information on the formation, structure, and shifts deep in the Earth can be obtained by recording and analyzing acoustic signals from the recrystallization of the rock by the probe.

Similar technology for penetrating ice:

A cryobot or Philberth-probe is a robot that can penetrate water ice. A cryobot uses heat to melt the ice, and gravity to sink downward. The difference between the cryobot and a drill is that the cryobot uses less power than a drill and doesn't pollute the environment.

Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/806167 2015-02-03T03:57:37Z 2015-02-03T03:57:37Z Two songs by Ringside
Ringside - Wikipedia
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/793630 2015-01-09T07:02:35Z 2015-01-09T07:02:35Z Curiosity wheel damage: The problem and solutions (link)
This is an excellent article about the unexpected wheel damage experienced by the Curiosity rover: Curiosity wheel damage: The problem and solutions.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/761753 2014-10-28T23:38:57Z 2014-10-28T23:38:57Z Greg Brockman on figuring out the CTO role at Stripe
Greg Brockman on figuring out the CTO role at Stripe: #define CTO.

This post also had some good stuff about figuring out the VP of Engineering role, since Brockman's counterpart was figuring that out at the same time.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/630852 2013-12-16T07:37:21Z 2013-12-16T07:37:22Z More than you ever wanted to know about Tunnel Boring Machines
More than you ever wanted to know about Tunnel Boring Machines:
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/624811 2013-11-30T06:20:38Z 2013-11-30T06:21:37Z Some thoughts on the Ender's Game movie
From the previews, I expected that Ender's Game the movie had taken major liberties with Ender's Game the novel.  After seeing the movie, I have to say that the liberties taken were surprisingly small.  In fact I'm not sure I've ever seen any movie based on a novel try so hard to be faithful to the source material.

Unfortunately there's a downside -- there were a number of scenes (too many of them) that seemed quite forced precisely because they were trying so hard to cram in themes from the book that there just wasn't time for in a two hour movie.

This is an age-old dilemma: what do you want, a more faithful adaptation of the source material or a better movie?

As a fan of the novel, I am extremely pleased that this movie was made by people who understood and clearly cared about the source material.  On the other hand, I think science fiction cinema really suffered for this choice.  I'm torn.  I think they should have taken more liberties with the story and made a better movie.  Then again, if they'd done that, I'd probably be writing a blog post complaining about the changes.

One last thing to think about -- Ender's Game was originally a short story before it was a novel.  Adapting the short story might have made for a much better movie.  But then you end up with a movie that doesn't address the morality of the xenocide against the Buggers at all, which is an important (and maybe the most important) theme of the novel.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/597176 2013-08-26T04:02:40Z 2013-10-08T17:29:00Z Kent - Sundance Kid
This song turned up on Pandora, on my "The Dandy Warhols" channel.

Kent - Sundance Kid

You might imagine that a band named Kent is British, but in fact they're Swedish, with lyrics predominantly in Swedish.  Sundance Kid, at least, is incredibly catchy, which is kind of a problem when the lyrics are in a language you don't speak.

Lyrics with translation.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/393066 2010-04-05T18:01:00Z 2013-10-08T16:46:37Z 404 Error Page Status

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide the user with tools we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.  This is being tracked in bug 482874.



The primary sticking point is the review of docshell/resources/content/notFoundError.xhtml.  The network error pages seem to be poorly owned, and it’s not clear to me who to even ask for review.

Final L10N approval is waiting on error page approval.

There are some relatively minor changes to nsDocShell.cpp since Boris Zbarsky’s approval last year.  I don’t think this is a big deal, but it is C++ and it is in docshell, so it probably needs a further look.

There’s one minor outstanding change needed for RTL approval.

Next Steps

A code reviewer for notFoundError.xhtml needs to come forward and help me out. Any suggestions?

If a code reviewer materializes and provides code review, I'll take care of the necessary code changes.  I have already tried (without success) to run down a code reviewer myself, so I need help from someone to do that.


  • Just a reminder: I am no longer a Mozilla employee, just a volunteer.  I have time right now to work on this feature.  In another month or two this may not be the case.
  • This page has been translated to Belarusian, see http://pc.de/pages/404-error-page-status-be.


Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/393067 2010-02-28T19:38:00Z 2013-10-08T16:46:37Z Parameterizing ENTITY definitions for use in XHTML

It sometimes makes sense to parameterize localizable strings.  For example, in the 404 error page, I need to display a string that looks like:

 "Search {site} for {search-phrase}"

If this string were in a .properties file, it might actually look like:

  "Search %S for %P"

However, since I need to reference the string from a non-privileged XHTML page, I have to use an  ENTITY definition in a ".dtd" file.  The "%" character is not legal in an entity definition, at least not as a literal.  And, if we want to parameterize an entity, we have to roll our own parameterization scheme anyway.  For XHTML files, it turns out the simplest way to do this is to embed XHTML markup in the entity definition, for example:

  "Search <span id='site'/> for <span id='searchPhrase'/>"

It's then trivial to look these elements up using JavaScript and inject the appropriate contents at runtime.  This makes for ugly looking strings, but it's dirt simple to implement.

It occurred to me the other day that since you can embed references to other entities inside an entity definition, you could replace the SPAN markup with something like:

  "Search &param.site; for &param.searchPhrase;"

A more complete example might look like:

  <!ENTITY httpFileNotFound.searchSiteFor
      "Search &httpFileNotFound.paramSite; for &httpFileNotFound.paramSearchPhrase;">

where the parameter entity definitions look something like:

  <!ENTITY httpFileNotFound.paramSite "<span id='site'/>">
  <!ENTITY httpFileNotFound.paramSearchPhrase "<span id='searchPhrase'/>">

This looks better to me than having the SPAN elements inlined.  It would look even better if we could dispense with the "httpFileNotFound." qualification and just say:

  <!ENTITY httpFileNotFound.searchSiteFor "Search &paramSite; for &paramSearchPhrase;">

or even:

  <!ENTITY httpFileNotFound.searchSiteFor "Search &SITE; for &SEARCH_PHRASE;">

What do you think?
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/393070 2009-09-29T23:24:00Z 2013-10-08T16:46:37Z Insertion of tracing code using the C++ preprocessor

A few months back I got this wild idea that you could insert trace-logging code (not jit-tracing, that's something else) into more or less arbitrary C++ by redefining certain C++ keywords as macros. It turns out that you can, in fact, make this work, at least for certain non-trivial cases. Notably, I've been able to compile an instrumented version of Firefox on OS X using this technique.

I'm not sure whether the technique is generally useful, but given our new focus on crashes and hangs, I thought I'd at least mention it. It's probably not particularly useful for crashes but it may have some utility for debugging hangs. I used an earlier version of this code to debug a Firebug hang (Bug 497028), although in that case I already had a good idea where the problem was and I probably could have used the debugger just as effectively.

Here's the latest tracing code, which I have in a file called "quicktrace.h":

#ifdef __cplusplus
#include <stdio.h>
#include <string.h>
static void QuickTrace(const char *fileName, int lineNumber)
{
    if (lineNumber % 100 != 0) return;
//  if (strstr(fileName, "/js/") != NULL) return;
    fprintf(stderr, ";;; %s %d\n", fileName, lineNumber);
}
#define if     if (QuickTrace(__FILE__, __LINE__), 0) {throw 0;} else if
#define for    if (0) {throw 0;} else for
#define switch if (0) {throw 0;} else switch
#define do     if (0) {throw 0;} else do
#endif

I inserted this code globally by adding the following two lines to my .mozconfig:

export CXXFLAGS="-include /Users/cbartley/dev/mozilla-e/src/quicktrace.h"
ac_add_options --enable-cpp-exceptions

Note that the use of exceptions is non-functional -- the only purpose is to suppress warnings about code paths that don't return a value. If you don't care about the warnings you can dispense with the "throw 0" statements in the header file and the --enable-cpp-exceptions in your .mozconfig file.


  • This is unquestionably an abuse of the preprocessor. Don't go down this route unless you really think you know what you're doing.
  • If you really want to create an instrumented build, there are better ways to do it, for example Taras Glek's "Pork" tool. Even using shell scripts and sed to create an instrumented source tree might be a better choice.
  • I think the code in this post will probably work on any GCC-based build, but I don't know for sure. As far as I can tell, Visual C++ doesn't have an equivalent of GCC's "-include" flag and I haven't tried this on Windows.
  • To do useful trace logging you really want a global implementation of the "QuickTrace" function. This is somewhat challenging for Firefox because of the way it's linked. I'm leaving this part as an exercise to the reader.
  • The Firefox build actually compiles some tools from source and then uses those tools later on in the build. This means that if you're going to generate logging from the trace function you do not want to send it to stdout.
  • It might make more sense to include the quicktrace.h header file from another very commonly used header file. This might be the only option on Windows anyway.
  • I think it should be pretty easy to build an instrumented Firefox using this approach and the Try server. I haven't actually attempted this yet, though.
  • If you generate output every time the trace function is called, it will be unbearably slow.
  • You can't generate a proper call graph using this approach, although with some post-processing you could probably get a useful approximation of a call graph.
  • There's probably some caveat that I forgot to mention.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/393071 2009-09-26T17:46:05Z 2013-10-08T16:46:37Z 404 Error Page Status

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide the user with tools we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.  This is being tracked in bug 482874.


On hold for the week while I worked on higher priority Firefox 3.6 stuff.  There are still some L10N issues to address and some general touch up before I can roll a new patch.

Next Steps

Address remaining issues raised in last round of code review and get another patch out.  This will probably have to wait until after the Firefox 3.6 code freeze that's coming up.

  • The feature is controlled by a pref so it can be turned off.
  • Webmasters who don't want their 404 error pages to be overridden may have to add padding to their 404 error pages.  However, since IE and Google Chrome are already overriding 404 error pages using a similar size test, webmasters already need to do this.
  • We want to provide a way for the user to see the original 404 error page, but that's not in this patch.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/393073 2009-09-26T17:45:47Z 2013-10-08T16:46:37Z Troubleshooting Information (AKA about:support) Status

Feature Description

Mozilla's support organization has a longstanding request for a Firefox diagnostic page that can provide information about the user's Firefox installation such as which extensions are installed and what prefs have been modified.  The result is the "Troubleshooting Information" page, which is also accessible by typing "about:support" in the location bar.  This feature is fairly constrained for 3.6 since we didn't start work on it until a few days before string freeze. 

Screenshot: https://bug367596.bugzilla.mozilla.org/attachment.cgi?id=400664
Project Page: Firefox/Projects/about:support
Bug: 367596 -  (about:support) [RFE] Create a support info page/lister.


  • Landed!  Troubleshooting Information is now available in both trunk and Firefox 3.6 nightlies.
Next Steps
  •  Bug 518601 -  Troubleshooting Information page should not allow copy-and-paste of the profile directory.  This is a potential security issue and needs to be addressed before Firefox 3.6 ships.
  •  Bug 516616 -  Add an "Installation History" section to about:support.   Nice to have for 3.6.
  •  Bug 516617 -  Add an "Update History" section to about:support.  Nice to have for 3.6.

  • This is a starting point, not an ending point.  We can extend the functionality for Firefox 3.7.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/393075 2009-09-19T16:59:25Z 2013-10-08T16:46:37Z Troubleshooting Information (AKA about:support) Status

Feature Description

Mozilla's support organization has a longstanding request for a Firefox diagnostic page that can provide information about the user's Firefox installation such as which extensions are installed and what prefs have been modified.  The result is the "Troubleshooting Information" page, which is also accessible by typing "about:support" in the location bar.  This feature is fairly constrained for 3.6 since we didn't start work on it until a few days before string freeze. 

Screenshot: https://bug367596.bugzilla.mozilla.org/attachment.cgi?id=400664
Project Page: Firefox/Projects/about:support
Bug: 367596 -  (about:support) [RFE] Create a support info page/lister.


  • The implementation went through seven or so revisions in the days just prior to Firefox 3.6 string freeze.
  • Current page design includes "Application Basics" (App name, version, profile directory, etc.), "Installed Extensions", and "Modified Preferences".
  • Strings for all of the above were landed on trunk and the 1.9.2 (Firefox 3.6) branch in a strings-only patch.  This patch also included strings for "Installation History" and "Update History" sections, in hopes that we can get those features in as well for Firefox 3.6 (Bugs 516616 and 516617).
  • Discovered a performance bug in the FUEL preferences API (Bug 517312).  I've rewritten part of this code to use the lower-level preference API, which avoids the bug and is just much speedier all around.
Next Steps

Address the remaining issues from the last round of code review, and see if we can get the main patch landed.

  • This is a starting point, not an ending point.  We can extend the functionality for Firefox 3.7.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/393077 2009-09-19T16:58:25Z 2013-10-08T16:46:37Z 404 Error Page Status

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide the user with tools we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.  This is being tracked in bug 482874.


Final work was delayed while I worked on about:support/Troubleshooting Information.  Since then I've rewritten the Places part of the patch in JavaScript as requested in the most recent code review.  I still have some L10N issues to address and some general touch up.

Next Steps

    Pinky: "Gee Brain, what do you want to do next week?"
    The Brain: "The same thing we do every week, Pinky—try to land this patch!"

  • The feature is controlled by a pref so it can be turned off.
  • Webmasters who don't want their 404 error pages to be overridden may have to add padding to their 404 error pages.  However, since IE and Google Chrome are already overriding 404 error pages using a similar size test, webmasters already need to do this.
  • We want to provide a way for the user to see the original 404 error page, but that's not in this patch.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/393079 2009-09-08T16:57:07Z 2013-10-08T16:46:37Z 404 Error Page Status

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide the user with tools we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.  This is being tracked in bug 482874.


Found and worked around an SQLite bug (Bug 514291).  Did some performance analysis, suggestions-query tweaking, etc.

Latest patch is attached to the bug and review is requested.

A try server build is available.
Next Steps
  • If the reviews come back positive, get it landed.
  • If the reviews raise issues, address them ASAP.
  • The feature is controlled by a pref so it can be turned off.
  • Webmasters who don't want their 404 error pages to be overridden may have to add padding to their 404 error pages.  However, since IE and Google Chrome are already overriding 404 error pages using a similar size test, webmasters already need to do this.
  • We want to provide a way for the user to see the original 404 error page, but that's not in this patch.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/393081 2009-08-30T16:05:40Z 2013-10-08T16:46:37Z 404 Error Page Status

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide the user with tools we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.  This feature is being tracked in bug 482874.


Almost ready to go.  Notable progress this week:
  • The docshell changes have r+ from bz with just a few changes, most of which I've already made.
  • Axel has provided some feedback on localization.
  • Johnath has provided feedback on the XHTML.
  • Marco has provided some feedback about the Places changes.  Notably he points out that query performance could be an issue on large databases.
  • David Dahl has provided me with a "max_places" test database.  The bad news is the URL suggestions query can take ~30 seconds to run against this database.  The good news is that a very simple length-check guard in the query can make a big difference.
Next Steps
  • Address  the localization changes Axel raised, and then get formal L10N review.
  • Address the performance issues in the URL suggestions query.  (Probably just a LIMIT-based approach for the initial patch)
  • Get a formal review of Places changes (Dietrich did an informal review a few weeks ago).
  • Get a final UI review.
  • Make sure all the parts of the patch have r+ from the various parties.
  • Land the thing.
Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/393082 2009-08-24T02:28:00Z 2013-10-08T16:46:37Z 404 Error Page Status

Feature Description

We want Firefox to override server-supplied 404 error pages in certain cases.  The Firefox 404 page should provide the user with tools we think are more useful for resolving the situation than the server-supplied page.  This includes alternate ("did you mean") URLs derived from the Places database and some pre-loaded search links/controls.


The newest patch is almost ready for review.

This patch is mostly focused on making the code right, but there are a couple of functional changes:
  • The Firefox 404 error page is now pref-controlled.  There's a pref to turn it on and off, and a pref to limit which server-supplied 404 pages we'll override.  Currently we will only override server 404 pages of 512 bytes or less.
  • We're dropping the background image for now.

Next Steps

  • Complete the patch and get it in for review.
  • Blog about it.


Notes

  • This feature is being tracked by Bug 482874 - Provide a friendlier/more useful alternative when the user encounters a 404 error page.

Edit: Added Notes section.

Curtis Bartley
tag:curtisb.posthaven.com,2013:Post/393083 2009-08-03T06:23:42Z 2013-10-08T16:46:37Z Building Firefox under Eclipse/CDT, or I gotta have my identifier completion Introduction

I've been using Eclipse as a C++ IDE for Mozilla development, which apparently puts me in a distinct minority among Mozilla developers.  Since I'm neither an Emacs nor a Vim user, I needed an alternative.  Frankly, Eclipse is not all that great as an editor, but having an IDE with halfway decent identifier completion and code browsing goes a long way towards making up the difference.  It's not all that hard to set up an Eclipse project for mozilla-central (i.e. Firefox), but it's not exactly obvious either.  The purpose of this blog post is to explain one way to do so.

Disclaimer and Caveats

I'm not claiming that this is the optimal way to set up mozilla-central under Eclipse.  In fact, I'm not even claiming that it's right.  All I'm saying is that this approach is relatively simple and seems to actually work.  For expediency, these instructions make some assumptions.  First, they assume that you are developing under OS X.  I believe that they are easily adaptable to Linux and Windows+cygwin (or Windows+mozilla-build), but I don't know for sure.  Second, the instructions assume that you are already able to build mozilla-central from the command-line.  If you don't already know how to do that, then you don't want to start here.

Disclaimer #2

As a final disclaimer, this approach is relatively new to me.  I've mostly been using an older version of the CDT under Eclipse 3.0 and I'm not even going to mention what I had to do to get indexing to work properly.  This new approach is a lot simpler, and I'm reasonably confident that it will work well for sustained use.


I originally wrote these instructions in text-only form.  I've since added some marked-up screenshots for many of the steps.  These screenshots are attached at the end of the post.


And last, but not least, thanks to Mike "firetoad" Collins for trying out a slightly earlier version of these instructions.

Now the instructions.

Install Eclipse/CDT

  • Obviously you need Eclipse.  I will offer some advice, but I'm largely assuming that you can figure this part out for yourself.  These instructions assume that you are using Eclipse 3.5, better known as Galileo.  You need Eclipse with the CDT, the "C/C++ Development Tooling".  The easiest way to get this is to go to http://www.eclipse.org/downloads/ and download the pre-packaged Eclipse IDE for C/C++ Developers, which is what I've done.  I have an existing Eclipse installation, but rather than trying to upgrade, I've just got them side-by-side, which doesn't seem to be a problem as long as you set up separate workspaces for them.
  • By default Eclipse runs with a 256M heap, which is too small to reliably index the mozilla-central tree.  I've been running with 512M without any problem, although I've not really pushed it very hard yet.  There ought to be a way to configure the Eclipse heap size through the UI, but if there is, I haven't found it yet.  In the meantime, I've been launching it from the command-line, like so:
    • /Applications/eclipse-galileo/Eclipse-galileo.app/Contents/MacOS/eclipse -vmargs -Xmx512M &
  • Your path will differ, of course.  Also note that the path above is a little bit unusual because I've renamed the actual Eclipse application so it won't conflict with my other (older) Eclipse installation.
  • By default, Eclipse engages "Scalability Mode" for large files, which means it turns off most of the cool features.  I've increased the threshold to 15,000 lines (nsDocShell.cpp is about 11,000 lines of code, for example).  To change this setting, choose Eclipse >> Preferences, then type "scalability" into the search box.

Create a new project

  • Start Eclipse (but see the note above about making sure you're running it with enough memory).
  • Ctrl-click in the Project Explorer, choose New >> C++ Project
    • this will open the C++ Project dialog
  • In the C++ Project dialog,
    • In the Project name text field, enter the project name
      • I'm using firefox-eclipse
    • Uncheck Use default location
    • In the Location text field, enter the full path for the project source code.
      • I'm using /Users/cbartley/Dev/firefox-eclipse/src
    • Under Project type, choose Makefile project >> Empty Project
      • Note that other project options can cause Eclipse to create a default makefile, which you don't want.
    • Click Finish
  • You should now see the project firefox-eclipse in the Project Explorer.

Configure the project

  • Ctrl-click on firefox-eclipse in the Project Explorer, and choose Properties.
    • This will bring up the Properties for firefox-eclipse dialog.
  • In the Properties for firefox-eclipse dialog,
    • Select the Resource panel
      • Note the path after Location in the Resource panel.
        • This is the same path you entered a few steps above, but if you're like me, you've probably forgotten it already.
    • Select the C/C++ Build panel
      • Select Builder Settings inside the C/C++ Build panel
        • Uncheck Use default build command
        • In the Build command text field, enter:
          • bash -l -c 'make -f client.mk $0 $1' -b
            • this hairy-looking command invokes a bash shell which in turn invokes the actual make command.  The -l option (a lowercase L) tells bash to behave like a login shell, which sidesteps some environment problems that I've run into when running make directly from Eclipse.  I won't elaborate on the $0, $1, or -b, except to tell you to be careful that you put the quotes in the right place!
        • In the Build directory text field, enter the path to the project source
          • I'm using /Users/cbartley/Dev/firefox-eclipse/src
            • Note that this is the same path found in the Location text field on the Resource panel.  I usually just copy it from there with Cmd-C to reduce the possibility of a typo.
      • Select the Behavior tab inside the C/C++ Build panel
        • In the Build (Incremental build) text field, delete all and just leave the field empty.
    • Expand the C/C++ Build tab to show the tabs for its subpanels.
      • Select the Settings sub-panel
        • Select the Mach-O parser under Binary Parsers
    • Expand the C/C++ General tab to show the tabs for its sub-panels
      • Select the Index sub-panel
        • Check Enable project specific settings
        • Under Select indexer
          • Choose Full C/C++ Indexer (complete parse)
          • Check Index unused headers as C++ files
    • Click Apply
  • The project is now set up, but there's no source code yet.

Get the Source Code

  • Eclipse has created the directory for the project, but it's empty.  I've created my project at /Users/cbartley/Dev/firefox-eclipse/src.  Following standard convention, I want to check the mozilla-central project out to a directory named src, the same one as above, in fact.  The problem is that Mercurial complains if the directory already exists.  Checking the source out first doesn't help, because then Eclipse complains that the directory already exists.  I'm going to check mozilla-central out to a different location and then manually merge them.  Note that Eclipse has created some files inside the src directory that we want to preserve.
  • cd into the project directory
    • I do cd /Users/cbartley/Dev/firefox-eclipse/src
  • cd out one level
    • cd ..
      • I'm now in /Users/cbartley/Dev/firefox-eclipse
  • Now type the following commands
    • hg clone http://hg.mozilla.org/mozilla-central src-aside  # get the source code
    • mv src-aside/.h* src                                      # move mercurial files into src
    • mv src-aside/* src                                        # move regular files into src
    • rmdir src-aside                                           # remove the now superfluous src-aside directory

Set up the .mozconfig file

  • Create a basic .mozconfig file for the project; I'm assuming you've already got a usable one some place.
    • For example, I do cp ../mozilla/src/.mozconfig src
  • Make sure the "-s" option is NOT on for make
    • Normally make displays the complete command line for building each file.  When invoked with the "-s" option, however, it only displays the name of the file being built.  This is a problem for building under Eclipse, since Eclipse parses the build output to figure out where the header files are.
    • In my existing .mozconfig, I have the following line:
      • mk_add_options MOZ_MAKE_FLAGS="-s -j4"        # before
    • I delete the "-s", leaving
      • mk_add_options MOZ_MAKE_FLAGS="-j4"           # after
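
If you don't already have a .mozconfig handy, here's a rough sketch of a minimal one.  The paths and options here are illustrative only, so adapt them to your own setup; the one thing that matters for Eclipse is that MOZ_MAKE_FLAGS has no "-s":

```shell
# Illustrative .mozconfig -- adjust paths and options for your own machine.
. $topsrcdir/browser/config/mozconfig           # build Firefox, the browser
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/objdir    # keep object files in one place
mk_add_options MOZ_MAKE_FLAGS="-j4"             # parallel build; note: no "-s"
ac_add_options --enable-debug                   # debug build for development
ac_add_options --disable-optimize
```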

    Build the project under Eclipse

    • Make sure that firefox-eclipse is selected in the Project Explorer.
    • Select the Console tab in the bottom pane so you can see the build output
    • Select Project >> Build Project from the menu
    • Wait

    Build the project under Eclipse, again

    • We want to do a clean rebuild under Eclipse; this extra step seems to be necessary for Eclipse to find the IDL-generated header files.  Don't ask me why.
    • Make sure that firefox-eclipse is selected in the Project Explorer
    • Select the Console tab in the bottom pane
    • Select Project >> Clean... from the menu
      • Make sure that Start a build immediately is checked
      • Click OK
    • Wait

    Make sure indexing is started

    • After the build completes Eclipse should start Indexing automatically.  A progress indicator should appear in the status bar on the lower right.
    • If indexing hasn't started, you can invoke it manually by:
      • Ctrl-click on firefox-eclipse in the Project Explorer, and choose Index >> Rebuild
    • Indexing takes something like four hours on my machine.  Eclipse remains usable in the meantime, you just won't have access to the more advanced code-completion and browsing features.

    After Indexing is completed

    • Cmd-Shift-R is kind of like the Awesome Bar for source files.
    • Cmd-Shift-T is kind of like the Awesome Bar for types, functions, etc.
    • Ctrl-Space invokes identifier completion.
    • Just typing identifier. or identifier-> will show a list of member variables and functions if you pause for a second.
    • F3 will take you to the declaration of an identifier
    • F3 over a #include will open the header file.
    • Hovering over a macro will show you its expansion.
    • Actually, many of the features will work in full (e.g. Cmd-Shift-R) or in part (e.g. identifier completion) before indexing is completed.

    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393084 2009-07-15T01:09:57Z 2013-10-08T16:46:37Z How do you surf the firehose? How do you surf the firehose?

    I'm not sure that "drinking from the firehose" is the right metaphor for Mozilla.  I still claim it's more like having people constantly shooting you with water pistols from every different direction.

    There's not much method to my madness, but here's how I try to keep up with the firehose:

    I'm subscribed to four newsgroups in Thunderbird:
    • mozilla.dev.planning
    • mozilla.dev.apps.firefox
    • mozilla.dev.platform
    • mozilla.dev.tree-management
    I don't actually read them any more, with the exception of mozilla.dev.planning which I now have set up to deliver straight to my main Mozilla email account.  (Because Mike Beltzner told me to.  "There's hardly any traffic on that list anyway" I recall him saying.)  Since conventional email is hardly used at Mozilla, this seems to work OK.

    I also have bugzilla notifications going to my main email account.  I don't know what I was thinking.

    I'm subscribed by email (my personal email) to the Firebug Group on Google Groups.  In hindsight this was a mistake, but I haven't fixed it yet.  I'm mostly skimming the thread subjects at this point, but I at least have some inkling of what's going on there.  I check this a couple of times a day when I check my personal email.

    I read planet.mozilla.com every couple of days.  These days I mostly skim it, reading just the occasional post.  I really should try to check it every day, since I find it pretty useful to the extent that I do read it.

    I peruse reddit and news.ycombinator.com (Hacker News) several times a day, and slashdot every now and then.  If a blog post doesn't make it to reddit or Hacker News, I probably won't see it.  I don't feel like I'm missing a lot of stuff by doing this, but on the other hand I do feel like I have to wade through a lot of crap to find the stuff I do want to read.  (Ha, Sturgeon's Law: 90% of everything is crap.  Curtis's corollary to Sturgeon's Law: On the Internet, 90% of the rest is crap too.)

    I don't do twitter, largely because it seems to scratch an itch I don't have.

    I have three hours or so of phone meetings every week, not counting the weekly Mozilla.org meeting.  I take these meetings fairly seriously since I'm working remotely.  I don't take notes, but I do attempt to actually pay attention.  I'm prone to zoning out even in meetings I'm physically in, and it's worse when I'm dialed in remotely.  My secret weapon?  I usually surf pictures on Flickr while I'm on the phone.  This works surprisingly well since it doesn't require any major effort from the verbal part of my brain.  In the future I should probably take notes at least some of the time.  Like any time Mike Beltzner is saying something important.

    I don't use a feed reader.  In the past I've used Bloglines and Google Reader.  I abandoned Google Reader because I always hated its infinite scrolling model.  (I don't remember why I hated it; I like infinite scrolling in other cases.)  I do have a master list of blogs I read.  Despite the dozens of blogs on this list, I only read a few regularly these days, and none of them are technical blogs anymore.

    And of course Mozilla lives and breathes IRC.  I'm using Colloquy as my IRC client.  It's not wonderful, but it works OK.

    Tag, you're it.

    Robcee's original post (complete with standard questions which I totally ignored) was designed to be viral.  I can't beg off this part, since I put him up to it.  Seriously, though.  I'm only asking because I think I might learn something.
    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393085 2009-05-08T21:36:35Z 2013-10-08T16:46:37Z What should Mozilla look for in an automated review system? In last week's All Hands when talking about possible process improvements, Mike Shaver asked if there was any interest in some sort of automated review system.  Google's Mondrian is probably the most well known review system.  Review Board seems to be the best known open source review system.  I've never used Review Board, but as an ex-Googler, I have used Mondrian.

    I think Mozilla would really benefit from a good review system.  My review system experience is limited to Mondrian, which is, of course, proprietary to Google.  I don't know how Review Board and the other open source systems measure up, and, as I'm about to get to, I think Mondrian itself has some notable deficiencies.  My overall point is that if we decide to adopt a review system, we should choose carefully.  I'll try to offer some guidance about how to do that.

    Quick Overview of Mondrian

    Mondrian has a dashboard view for outstanding changelists but to me the essence of Mondrian is its UI for reviewing a single changelist, and that's what I'm going to talk about here.  At the highest level you have a list of files modified in the changelist.  You can click on each file and get a diff of that file against the baseline in a side-by-side view.  You can easily see which lines were added, modified, or deleted.  OK, everybody is familiar with side-by-side diffs these days.  Here's the cool part though: A reviewer can click on any line and leave an inline note or comment.  Afterwards, the reviewee can go through the diff and respond to the comments.  The way this often works in practice is the reviewer leaves an instruction in the comment and the reviewee responds by making the requested change and replying to the comment with "Done."  In fact, there's a "Done" button as well as a "Reply" button on the comment just to make this last part easy.

    Once the reviewee has updated the changelist to the reviewer's satisfaction, the reviewer will approve the change and the reviewee can submit it to the source control system.

    What's not to like?

    Mondrian's review UI is great for simple line-by-line reviews, things like:
    • "You don't need to check for null here."
    • "You should check for an error here."
    • "Fix indentation" 
    Sometimes this level of line-by-line scrutiny is really useful.  For example, exception-free C++ code often requires the programmer to check virtually every single function call for errors.  This is hard for even the best programmers to get 100% right.  But let's be clear here.  What Mondrian is really good for, all the time, is line-by-line nitpicking of code.  And frankly, line-by-line nitpicking of code is already pretty easy without some fancy tool like Mondrian to make it extra easy.

    Mondrian's review comment system really seemed to encourage a style where there was a one-way flow of instructions from the reviewer to the reviewee: "Do this.  Do this.  Do this." and the reviewee replies with "Done.  Done.  Done."  Sometimes this is appropriate, but oftentimes it isn't.

    Of course the reviewee always has the option of clicking "Reply" instead of "Done".  You could have a whole thread of comments and replies if you wanted to.  But given the limitations of the UI, that would be kind of like communicating with short messages written on post-it notes.  And not regular sized post-it notes either, but rather those extra tiny ones.

    So Mondrian not only encouraged a review focus on individual lines, it also tended to encourage a one-way flow of information from reviewer to reviewee which could easily degrade into a one-way flow of commands.

    What would I want in a "good" review system?

    It may seem like I'm arguing that a review system should actively discourage line-by-line review, and that's not actually the case.  I think that review style is often useful, and pretty much any review system, good or bad, will support it.

    There are really two fundamental things that I do want out of a review system.
    • The system should go out of its way to support a bi-directional flow of information between the reviewer and the reviewee.  In the extreme, it should provide a means of carrying on an actual conversation.  This could be supported within the review system itself, but even a simple means of escalating a conversation to email (or even IRC) might be a big help.
    • The system should support higher level discussions about the code under review.  Actually I think it should go so far as to encourage this kind of information flow.  You can sort of do this with Mondrian, but you are usually better off just going to email.

    Some general guidance

    I've put a fair amount of thought into how I'd like an ideal review system to work.  Mostly I've been thinking in terms of concrete features, but that's of limited utility if what you want to do is choose between existing review systems rather than writing a review system yourself.

    So I'm trying to figure out some general guidance.  I think what I'm trying to say in this post more than anything is that affordances matter.  Mondrian, for example, seems to afford the line-by-line scrutinizing, nitpicking approach to code reviews.  It also seems to afford a model where the reviewer simply gives instructions to the reviewee and the reviewee just automatically carries them out.

    Mondrian offers some minimal affordance for discussing (as opposed to simply commenting on) a particular line of code, but it could do a lot more.  Notably it does not seem to offer any real affordance for discussion at the design level, which has always seemed to me like a serious omission.

    A simple concrete example

    Here's one simple way that Mondrian could be improved which I hope will illustrate my point about affordances.  As described above, Mondrian comments have two buttons, "Reply" and "Done".  It could offer other choices as well, so you might have:

    "Reply", "Done", "Done w/comment", "Defend", etc.

    These latter two functions could easily be done with "Reply", but if you give them their own buttons, you explicitly tell the reviewee that there are other options that can be considered here.  In particular, in this case, they give the reviewee permission to say "I had a reason for doing this the way I did, would you please consider my reasoning?".

    My larger point here is that a review system's UI strongly shapes the code review process, and it can shape it in good ways or bad ways.  As a result, we want to think not just about what we want out of a code review system, but also what we want for the code review process itself.

    Super-short Summary

    1. A good review system needs to support two-way information flow, and it should probably go so far as to actively encourage it.
    2. A good review system should support review at a higher level than simply line-by-line, and it should probably go so far as to actively encourage it.
    3. Affordances matter.


    • This post describes Mondrian as of about nine months ago when I last used it.  It may have received a major upgrade in that time, I don't know.  Mondrian may also have had useful features that I didn't know about -- not all of its features had good discoverability.
    • Probably the most common code review case at Google is a reviewer and reviewee who are peers, have known each other for months if not longer, and who sit near each other, often in the same room.  Mondrian's limitations are a lot less important in this scenario, since a lot of out-of-band communication can happen the old fashioned way, by talking face-to-face.
    • In some circumstances you can have reviewers and reviewees who work in different locations and who have never met in person.  This is not uncommon at Google, and it is in fact very common at Mozilla.  In this scenario, the way the review system works becomes much more important.  
    • The obvious way to get around Mondrian limitations is to fall back to email.  I ultimately started emailing concerned parties requesting informal reviews before requesting formal reviews through Mondrian.  Mondrian can still be used in this case -- it can see pretty much any outstanding changelist.  Nevertheless, by making an informal request in email, you could get a high-level review of a changelist without getting mired in low-level details.  And since this was through email rather than Mondrian comments, you could hold an actual conversation about it.  It turned out my team's superstar intern had independently started doing the same thing.  I jokingly referred to this as "doing a review before the review".
    • This post might lead you to believe that I'm a Mondrian-hater.  Actually, I think Mondrian is a very good tool, I just feel like they quit working on it before they were done.
    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393086 2009-04-04T16:47:00Z 2015-04-06T22:04:12Z The Firebug tabs-on-top UI refactoring has landed

    Note: Since I was getting lots of comment spam I've closed comments here.  If you have any comments or suggestions, feel free to file them on the Firebug Issues page or drop us a line in the Firebug discussion group in the Tabs on Top thread.

    The Firebug tabs-on-top UI refactoring has finally landed in Firebug 1.4 alpha.  You can download it as firebug-1.4.0a17.xpi (or later) from http://getfirebug.com/releases/firebug/1.4.  This change is pretty close to the one I described and prototyped in February (see Improving the Firebug UI) and which in turn follows my original proposal from late last year (see Firebug UI: Tabs on the Top).  These blog posts describe the reasoning behind the change so I won't rehash it in full.

    The old layout had a toolbar at the top of the Firebug UI, and a tabbed "panel bar" below it.  Sometimes a tabbed "side panel bar" would show up to the right of the main panel bar.  This change essentially takes the toolbar and the side panel bar (when it appears) and puts them inside the active panel.  This puts the panel tabs at the top of the Firebug UI (hence the name "tabs-on-top") and panel specific controls underneath.  Several controls that are not panel-specific (the main Firebug menu, the search box, and the detach window and close window buttons) have been moved to the tab strip, so they are effectively in the same location as before the change.  The idea is that UI elements that are specific to a panel look like they are inside that panel.  This description probably makes the change sound more complicated than it really is.  A screenshot will better communicate how it looks.

    Firebug UI with the tabs-on-top layout.  The Net panel is selected.

    Even the screenshot is a poor substitute for actually downloading the extension and trying it out, so let me encourage you to do that.

    There are still some outstanding issues, before this change can really be called complete.
    • The location of the main panel Options menu is less than ideal.  I think the right thing to do here is to merge the panel options with the menus on the panel tabs, but I haven't had time to prototype it yet.
    • The Inspect button really belongs on the top toolbar.  I've held off on relocating it because I think it will look too much like the label of a panel tab.  There are a couple of options here.  We could change the styling to make it look more like a button.  We could replace the label with an icon, or we could change the tab styling so even unselected tabs show a tab outline.  Probably any of these solutions would work, we just need to figure out what the best one is.
    • The position of the debugger buttons can jump drastically under some conditions.  Normally the debugger buttons only appear when the Script panel is active.  However, if JavaScript execution is paused, the debugger buttons always appear, regardless of the tab.  With the new layout this means their position can jump if you switch to another tab which doesn't have a side panel bar.  I think it might make sense to move them to the left side of the toolbar, which will eliminate this problem.  This seems like it might be a fairly controversial change, so we really need to explore it separately.
    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393087 2009-03-06T18:47:00Z 2013-10-08T16:46:37Z What are "futures", and why should you care?


    Last week we were discussing, among many other things, ways to speed up Firefox during startup.  One obvious option was to move more of our I/O off of the main thread.  This in turn involves making more code asynchronous, and asynchronous code is simply harder to manage.  Mike Shaver mentioned something about "futures" as a possible way to handle the extra complexity, and then the discussion moved on to something else.

    I'm not exactly an expert, but I've not just used futures, I've written my own implementations in JavaScript and even Object Pascal (in hindsight I'm not sure the latter was a good idea, but it was certainly an interesting exercise).  Futures seem esoteric, but they really shouldn't be -- the idea is really quite simple.  In this post I'll try to explain what futures are and how they can be used to make asynchronous programming easier.

    What exactly is a future anyway?

    In the simplest form, a future works like an IOU.  I can't give you the money you've asked for right now, but I can give you this IOU.  At some point in the future, you can give me the IOU and I'll give you the money -- if I have it.  If I don't have it yet, then you can wait until I do.  I get paid on Friday.

    Alternatively there's the dry cleaner metaphor.  You drop your clothes off on Monday and the clerk gives you a ticket that you can use later to reclaim your clothes after they've been cleaned.  The clothes will be ready on Tuesday morning, but if you show up too early, you'll have to wait.  On the other hand, if there's no hurry, you can just do other stuff on Tuesday and show up on Wednesday with a reasonable expectation that they'll be ready when you arrive.  You'll just hand your ticket over, collect your clothes, and be on your way.

    A future is similar to the IOU (or the dry cleaning ticket).  It gives you a way to represent the result of a computation that has not yet completed, and it allows you to access that result once it becomes available.  So you can call a function which starts some asynchronous process but doesn't wait for it to finish.  Nevertheless the function can return you a useful result: a future which can be used to claim the real result later.

    Of course if you ask for the result too soon, you'll have to wait.  On the other hand, if the result becomes available before you want it, then it will wait for you.
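
    To make the idea concrete, here's a minimal sketch of a future in JavaScript.  Browser JavaScript can't block, so the blocking get() used in the examples below becomes a callback-style whenReady() here.  All the names are hypothetical, not any particular library's API:

```javascript
// A minimal, hypothetical future.  Browser JavaScript can't block, so
// instead of a blocking get() we expose whenReady(), which runs a
// callback as soon as the value is available.
function Future() {
  var ready = false, value, callbacks = [];
  // The producer calls set() when the result is finally available.
  this.set = function (v) {
    ready = true;
    value = v;
    callbacks.forEach(function (cb) { cb(v); });
    callbacks = [];
  };
  // The consumer calls whenReady(): if the value has already arrived, the
  // callback runs immediately; otherwise it is queued until set() is called.
  this.whenReady = function (cb) {
    if (ready) { cb(value); } else { callbacks.push(cb); }
  };
}
```

    In IOU terms: handing out the future is writing the IOU, set() is payday, and whenReady() is showing up with the ticket.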

    A simple example

    Here's an example of what this might look like in pseudo-JavaScript:

    function doStuff() {
      var cleanClothesFuture = dryCleaner.dropOff(dirtyClothes);
      var cleanClothes = cleanClothesFuture.get();  // block if the result is not ready yet
    }

    Compare this to the traditional way we'd handle this in JavaScript, using a callback:

    var cleanClothes = null;

    function doStuff() {
      dryCleaner.dropOff(dirtyClothes, function (clothes) { cleanClothes = clothes; });
    }

    These examples are not one hundred percent semantically identical, but they should be close enough to illustrate the point.  I contend that the first function is easier to write, easier to read, and easier to reason about.  I also contend that the difference isn't enough to get excited about.  It's when things get more complicated that futures become really useful.

    A more complicated example

    Imagine that I have a web page that sends an AJAX request to a server and then displays the results in an IFRAME -- and furthermore does it automatically on page load.  I have to wait for both the AJAX request to return data and for the IFRAME to finish loading -- only then can I display the results.  This can be done fairly simply using callbacks:

    function showData(dataUrl, iframeUrl) {
      var data = null;
      var iframeBody = null;
      function tryToShowData() { if (data && iframeBody) { showDataInIframe(data, iframeBody); } }
      requestDataFromServer(dataUrl, function (response) { data = response.data; tryToShowData(); });
      insertIframeBody(iframeUrl, function (iframeDoc) { iframeBody = iframeDoc.body; tryToShowData(); });
    }

    Now, imagine the same thing done with futures:

    function showData(dataUrl, iframeUrl) {
      var dataFuture = requestDataFromServer(dataUrl);
      var iframeBodyFuture = insertIframeBody(iframeUrl);
      showDataInIframe(dataFuture.get(), iframeBodyFuture.get());
    }

    Again, these two examples are not semantically equivalent -- notably there's no blocking in the first example.  Now let's imagine that we had a way to turn an ordinary function into a new function which takes futures as arguments and which returns a future in turn.  As soon as all the future arguments became available, the base function would be called automatically -- and once the base function completed, its result would be accessible through the future returned earlier.  I'll call this capability "callAsap": call a function as soon as possible after all of its future arguments become available.  Using callAsap(), the previous example might be rewritten as:

    function showData(dataUrl, iframeUrl) {
      var dataFuture = requestDataFromServer(dataUrl);
      var iframeBodyFuture = insertIframeBody(iframeUrl);
      showDataInIframe.callAsap(dataFuture, iframeBodyFuture);
    }

    In this case we don't care about the return value of showDataInIframe.  This example is much closer in behavior to the earlier callback-based example.  In fact, the callAsap() method would be implemented with callbacks underneath, but they would all be nicely abstracted away under the hood.
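    To make the idea concrete, here is one way a "listenable future" and callAsap() might be implemented.  This is only a sketch under my own assumptions -- the Future, resolve(), and addListener() names are hypothetical pseudo-code, not the API of any real library -- but it shows that the machinery really is just callbacks under the hood:

```javascript
// A minimal "listenable future" (hypothetical API, not a real library):
// listeners registered on the future fire once it resolves with a value.
function Future() {
  this.resolved = false;
  this.value = undefined;
  this.listeners = [];
}

Future.prototype.resolve = function (value) {
  this.resolved = true;
  this.value = value;
  this.listeners.forEach(function (listener) { listener(value); });
  this.listeners = [];
};

Future.prototype.addListener = function (listener) {
  if (this.resolved) {
    listener(this.value);        // Already resolved: fire immediately.
  } else {
    this.listeners.push(listener);
  }
};

// callAsap: call this function as soon as all of its future arguments
// have resolved, and return a future for the function's own result.
Function.prototype.callAsap = function () {
  var fn = this;
  var futures = Array.prototype.slice.call(arguments);
  var resultFuture = new Future();
  var remaining = futures.length;
  if (remaining === 0) {
    resultFuture.resolve(fn());
    return resultFuture;
  }
  futures.forEach(function (future) {
    future.addListener(function () {
      remaining -= 1;
      if (remaining === 0) {
        // All arguments are now available; call the base function.
        var args = futures.map(function (f) { return f.value; });
        resultFuture.resolve(fn.apply(null, args));
      }
    });
  });
  return resultFuture;
};
```

    Note that extending Function.prototype is just a convenience so that the `showDataInIframe.callAsap(...)` call style from the examples works; a standalone `callAsap(fn, futures...)` helper would do the same job.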

    One of the nice things about callAsap() is that it cleanly handles cases where you are waiting on more than one future.  Imagine that you've asynchronously requested data from two different servers:

    function showData(dataUrl1, dataUrl2, iframeUrl) {
        var dataFuture1 = requestDataFromServer(dataUrl1);
        var dataFuture2 = requestDataFromServer(dataUrl2);
        var iframeBodyFuture = insertIframeBody(iframeUrl);
        showDataInIframe.callAsap(dataFuture1, dataFuture2, iframeBodyFuture);
    }

    This segues nicely into the next topic: Arrays of futures.

    Arrays of futures

    Imagine if you have not one, or two, or three futures, but rather an arbitrary number of futures.  What we'd really like to have is a way to take an array of futures and produce from it a single future for an array of concrete values.  Something like:

    function showData(dataUrlArray, iframeUrl) {

      // The "dataFutureArray" is a concrete array of futures.
      var dataFutureArray = requestDataFromServers(dataUrlArray);

      // The "dataArrayFuture" is a future for a concrete array of concrete values.
      var dataArrayFuture = Future.createArrayFuture(dataFutureArray);

      var iframeBodyFuture = insertIframeBody(iframeUrl);
      showDataInIframe.callAsap(dataArrayFuture, iframeBodyFuture);
    }

    What this example might look like rewritten in callback style is left as an exercise to the reader.
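    Under the same assumptions as before -- a hypothetical listenable future with resolve() and addListener() methods, not any real library -- Future.createArrayFuture() might be sketched like this (the small Future class is repeated so the sketch stands on its own):

```javascript
// Minimal listenable future, repeated here so the sketch is self-contained.
function Future() {
  this.resolved = false;
  this.listeners = [];
}
Future.prototype.resolve = function (value) {
  this.resolved = true;
  this.value = value;
  this.listeners.forEach(function (listener) { listener(value); });
  this.listeners = [];
};
Future.prototype.addListener = function (listener) {
  if (this.resolved) { listener(this.value); }
  else { this.listeners.push(listener); }
};

// Turn an array of futures into a single future which resolves to an
// array of concrete values, once every input future has resolved.
Future.createArrayFuture = function (futureArray) {
  var arrayFuture = new Future();
  var remaining = futureArray.length;
  if (remaining === 0) {
    arrayFuture.resolve([]);
    return arrayFuture;
  }
  futureArray.forEach(function (future) {
    future.addListener(function () {
      remaining -= 1;
      if (remaining === 0) {
        // Values come out in the same order the futures went in.
        arrayFuture.resolve(futureArray.map(function (f) { return f.value; }));
      }
    });
  });
  return arrayFuture;
};
```

    Note how similar this is to callAsap() -- both are just counting down until the last pending future resolves.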

    An advanced example

    OK, now for a more elaborate example.  Imagine a function which retrieves the first page of Google search results for a particular query, and then goes through and re-orders the results based on its own ranking system.  Furthermore, imagine that this ranking is computed based on the contents of each web page.  We'll need to make requests to many different servers for many different web pages.  This will be fastest if we issue all the requests at once.

    function search(query) {

      // Take a concrete search result and return a future to a
      // [searchResult, ranking] pair.

      function requestWebPageAndRanking(searchResult) {
        var webPageFuture = requestWebPage(searchResult.url);
        var rankingFuture = computeRankingFromContent.callAsap(webPageFuture);
        return Future.createArrayFuture([webPageFuture, rankingFuture]);
      }

      // Take a concrete array of search results and return a future to
      // an array of [searchResult, ranking] pairs, sorted by ranking.

      function requestSearchResultsSortedByRanking(searchResultArray) {
        var rankingArrayFuture = Future.createArrayFuture(
          [requestWebPageAndRanking(searchResult) for (searchResult in searchResultArray)]);
        return sortArraysByKeyIndex.callAsap(rankingArrayFuture, 1);
      }

      // Request search results, re-rank them, and then display them.
      var searchResultArrayFuture = requestGoogleResults(query);
      var sortedRankingArrayFuture =
        requestSearchResultsSortedByRanking.callAsap(searchResultArrayFuture);
      displaySearchResults.callAsap(sortedRankingArrayFuture);
    }

    In all fairness, this is not as simple as a synchronous blocking implementation.  Keeping your arrays of futures and futures of arrays straight is a little bit taxing.  Imagine what a callback model might look like, however, with callbacks inside callbacks.  One advantage of using futures is that you can often write traditional blocking code and then in straightforward fashion translate that code into asynchronous code using futures.


    Notes, in no particular order

    • The examples may look like JavaScript, but they are, in fact, pseudo-code.  The implementations of some of the helper methods, pseudo-code or otherwise, are left to the imagination.
    • I have completely glossed over error handling, including such interesting topics as exception tunneling, fallback values (nulls, empty arrays, NullObjects), not to mention timeouts and retries.  If this sounds scary, it's because error handling in any kind of async code is a difficult topic.  Futures don't make the situation any worse, and might make it better.
    • The name "callAsap" is my invention, although I'm certain the underlying idea has been invented independently many times.  Also note that callAsap() and Future.createArrayFuture() are fundamentally quite similar.
    • Java futures (java.util.concurrent.Future) use a blocking get() method like the one in the first example.  I don't actually know how you could do a blocking get in conventional single-threaded JavaScript, which is the whole genesis of callAsap().  Practical JavaScript futures need to be "listenable futures" which raise events when they resolve.  The methods callAsap() and Future.createArrayFuture() can then be implemented using this capability.  Client code can then use these methods to avoid writing explicit callbacks.
    • The re-ranking Google search results example is contrived, but it's based on a similar project I did a few years ago.  In that project I used callbacks, and it was quite painful.
    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393088 2009-02-17T18:02:00Z 2015-04-06T22:01:25Z Improving the Firebug UI Refactoring the Firebug UI

    My main project for the last few weeks has been doing some Firebug UI work that I’ve been thinking about for a while.  Since I’ve just been moving UI elements around (to make the layout more logical), I’ve been referring to this effort as “UI refactoring”.

    So here’s the deal.  It’s not enough that I think this redesign is an improvement, or even that the Firebug working group has been supportive.  We need to know what you, the Firebug user, think.

    Before I get started, I should let you know that I’ve been working on the Firebug 1.4 branch, which is still undergoing active development.  The changes I’m proposing are fairly simple in the sense that they are really only a reorganization of existing UI elements.  However, if you have not yet seen 1.4, be aware that there are other significant UI changes, notably John J. Barton’s work on the activation UI, kpdecker’s multi-file search capability, and Hans Hillen’s work on accessibility.

    What’s wrong with the current Firebug UI?

    The basic UI layout for Firebug today consists of a toolbar across the top, a main “panel bar” underneath, and an optional “side panel bar” to the right.  Each panel bar consists of a set of tabbed panels, only one of which is visible at a time.


    This is a simple design, and seems pretty logical.  There’s a problem, however.  The contents of the toolbar can change substantially depending on which panel is selected in the main panel bar.  For some panels this is maybe not such a big deal.  At the other extreme, consider the Net panel.  When the Net Panel is selected, it adds seven buttons to the toolbar which control what information is displayed inside the Net panel’s content area.

    For me, using these buttons, with this placement, feels even weirder than the screenshot looks.  They would feel much more natural if they were adjacent to the content area that they affect.

    Furthermore, the side panel bar is only visible when either the HTML tab or the Script tab is selected.  And, of course, it’s a different set of panels that show up in the side panel bar depending on whether it’s the HTML panel, or the Script panel.  Logically, you could think of the HTML panel as having a smaller panel bar embedded inside it which in turn contains several sub-panels specific to the HTML view.  You can think of the Script panel similarly.  The other panels simply don’t have any sub-panels they need to display.

    Basically, the problem is that clicking a tab changes not only the main panel below the tab strip, but also the toolbar above it, and oftentimes the side panel bar to the right as well.  You click on a tab, and the whole UI mutates around it.  Even though the current UI layout makes superficial sense, I contend that the toolbar – or at least a big chunk of it – belongs inside the main panel bar.  I also think you can make an argument that the side panel bar, when it appears, should look like it’s inside the currently selected panel.

    Another way to think of it: The tabs – Console, HTML, CSS, Script, DOM, and Net – should appear at the top of the Firebug UI where the panel-specific toolbar elements appear now.  Since these main panels are the major organizing principle of the Firebug UI, it makes sense that they should be more prominent.

    I’m clearly not the first person to come to this conclusion since Firebug Issue 222 makes a similar argument.

    This is the layout I have working now.  I’ve been referring to this as the “tabs-on-top” layout.

    In this design, not all of the toolbar has been relocated.  The main Firebug menu that hangs off the Firebug button is in the top-left just as it always has, and the search elements and Firebug window control elements are still at the top right.  Although the side panel bar doesn’t really appear to be inside the main panel, it does at least appear subservient. 

    There are some smaller changes as well.  Notably the tabs are now “stand-up” style, rather than “hanging” style.

    The handling of the main panel Options button and the Inspect button still leaves something to be desired.  Nevertheless, I think this is a more logical layout, although you probably have to use it to really appreciate the improvement.

    What now?

    As I mentioned above, you really need to try out the new layout to see if it really does work better.  To that end, I've attached an experimental version of the Firebug extension to this post, so you can download it and try it out.  I've tested it fairly thoroughly on Windows and to a lesser extent on Mac and Linux.  Also, since it's built on top of the 1.4 branch, which is still under active development, you probably won't want to leave it installed for too long.  However, it should be good enough for you to take it for a test drive.


    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393089 2009-02-15T05:16:44Z 2013-10-08T16:46:37Z Green Trees, Information Radiators, and Photo Frames One of the first things you learn about as a Mozilla developer is Tinderbox, Mozilla's automated build and test system.  Mozilla's Tinderbox hosts a number of "trees", specific software projects that are built and tested independently.  Tinderbox has a page that displays the current status of whatever tree you might be interested in.   For example, the display for the Firefox 3.1 tree is here.  If everything is wonderful, the top of each column on the web page is green, indicating that the most recent build and test run on that machine was successful.  This is called a "green tree".

    In reality, the tree is rarely one hundred percent green.  Developers with patches ready for checkin (ready to "land" in Mozilla parlance) are constantly checking to see if the tree is green.  This can get old.  Justin Dolske, in a fit of Wallian laziness, developed isthetreegreen.com to simplify the process.

    It's an unfortunate fact that lately the tree has rarely been green and it's become more a matter of "Is the tree green enough?".  I discovered that if I had the tree status page open in Firefox, I could essentially see the entire width of the page if I stretched the window to the full width of my screen and made the text as small as possible.  This way I could get a feel for how green the tree was at a glance.

    The problem with this approach is that most of the time the browser window was obscured by other windows.  The other problem was that I kept accidentally clicking on it, or accidentally opening other pages in the really wide window, or hiding it and forgetting about it.  It would have worked better if I had a second monitor configured, but frankly, if I had a second monitor I'd want to use all the screen real estate for active work anyway.

    What I'm really looking for is some kind of ambient information display.  I want to be able to catch a high level view of the tree status with just a glance, but without it being in my way as I try to work.  This is similar to Alistair Cockburn's concept of an Information Radiator.  That's it, I want my own personal information radiator.

    Then the other day, I noticed digital photo frames at Fred Meyer.  These have been around for awhile, but it's the first time I noticed them on the shelves at the local discount market (most likely because I haven't been paying attention).  I thought, hey wouldn't that be cool if you could set one up to display the tree?  But are there any that can run a web browser?

    It turns out there is.  Sony has a digital photo frame that runs Opera.  However, it looks like it's only available in Japan.  On the other hand, Samsung now has photo frames that support UbiSync, which is their USB video technology.  So technically, you'd just be using photo frames as a more or less standard multiple monitor setup.  The upside is that UbiSync doesn't require video card support, and you can daisy chain multiple UbiSync photo frames -- apparently as many as five or six.  A UbiSync monitor doesn't have the raw performance of a regular monitor running off of a video card, but for displaying slow-changing information like the Tinderbox tree, it would be plenty fast enough.  Of course if you were going to go this route, it would probably make sense to get your hands on a Samsung 22 inch UbiSync monitor, rather than one of the 10 inch photo frames.
    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393090 2009-01-24T03:33:27Z 2013-10-08T16:46:37Z Creating Icons with JavaScript and SVG I'm a programmer, not an artist.  Nevertheless on occasion I find myself thinking about icons.  Worse yet, sometimes I find myself thinking about designing new icons from scratch.  I've got neither the patience nor the skill to build an icon one pixel at a time.  Instead, I've been using JavaScript to dynamically build icons in SVG.  This may be the hard way to build a single icon; however, if you're doing a bunch of variations on a theme, then it can be a definite win.  That's basically the situation I'm in now -- I have a basic arrow design in a variety of orientations and with a couple of possible decorations.  I'm also generating icons in a range of sizes since I don't know what the optimal size will be.

    I'm not far enough along to really talk about the icon design itself.  I can, however, address one important but mundane issue with this technique, namely: How do you render SVG into a transparent PNG?  Now, if you've got a garden variety SVG file, you can simply load it into Inkscape and export it to PNG, no problem.  If, on the other hand, you've got an SVG file that contains embedded JavaScript, this won't work, since Inkscape won't run the JavaScript.

    Instead I've been using a two step process where I use Firefox to run the JavaScript, and Inkscape to generate the PNG.

    1. Load your SVG file (which, remember, must run some JavaScript) into Firefox.
    2. Right-click and choose Select All from the context menu.
    3. Right-click and choose View Selection Source from the context menu.
    4. Type Ctrl-S/Cmd-S to save the SVG to a new file.
    5. Load the new file into Inkscape.
    6. In Inkscape, select File >> Export Bitmap

    There are probably better ways of doing this, especially if you find yourself doing it repeatedly.  For example, Vlad suggests that an entirely JavaScript-based solution is possible using the <canvas> element and the drawWindow() method.  That, as they say, is left as an exercise to the reader.
    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393091 2009-01-20T02:29:49Z 2013-10-08T16:46:37Z A better history command The Bash history command lists your numbered command history -- all of it by default.  Other than limiting the output to the most recent N commands, it doesn't give you any real control over the output.  As a consequence I often find myself typing things like

     history | grep find

    or even (I use find a lot)

     history | grep find | grep cpp

    I found myself doing this so much that I eventually bit the bullet and wrote a better history command that provides built-in grep-like filtering.  So now I can type things like

     hist find
     hist find cpp
     hist find cpp viewsource

    The last command corresponds to something like

     history | grep -i find | grep -i cpp | grep -i viewsource

    which is a pain to type in full.

    I've been using hist for a while now, and I've found it very useful, so I thought I'd share it in case there are other Bash users out there who might find it handy.  If you want to use it just paste the following into your .bash_profile:

     # Quick and dirty case-insensitive filtered history command.
     # "hist" ==> "history"
     # "hist foo" ==> "history | grep -i foo"
     # "hist foo bar" ==> "history | grep -i foo | grep -i bar"
     # etc.
     # Note that quotes are ignored, e.g.
     #   <<<hist "foo bar">>> is equivalent to <<<hist foo bar>>>
     hist () {
       HISTORYCMD="history $@"             # "foo bar" ==> "history foo bar"
       HISTORYCMD="${HISTORYCMD% }"        # "history " ==> "history" (no trailing space)
       eval "${HISTORYCMD// / | grep -i }" # "history foo bar" ==>
                                           #   "history | grep -i foo | grep -i bar"
     }
    Curtis Bartley
    tag:curtisb.posthaven.com,2013:Post/393092 2008-12-18T23:22:03Z 2013-10-08T16:46:37Z A handy bash function for creating "file://" URLs I often find myself loading test files from my local filesystem into Firefox with "file://" URLs.  Unfortunately Firefox on OS X (or at least my local build of 3.1) is maddeningly inconsistent about recognizing an argument as a filesystem path or a URL.  So for example "ffz test.html" ("ffz" is my launch script) usually opens "file:///Users/cbartley/Dev/mozilla/test.html" (which I want) but sometimes tries to open "http://www.test.html/" (which doesn't even make sense).

    It seemed sensible to switch to simply supplying a full file URL on the command line so there's no confusion.  The downside is that file URLs need to be absolute and constructing an absolute file path on the fly is a pain.  Fortunately, I can make the computer do this work for me with a handy bash function.  The "fileurl" function below takes a path to a file (usually a relative path) and prints the corresponding "file://" URL for it.  So for example:
      fileurl ../test.html
    might print something like

      file:///Users/cbartley/Dev/test.html

    I can invoke my test build with something like
      ffz `fileurl ../test.html`
    and have it reliably open the file every time.

    I haven't yet rolled fileurl into the ffz script, but that's the logical next step.  I should also acknowledge that there may well be better ways to do this than the approach I've used -- I am no bash expert.  Finally, for all those other amateur bash programmers out there, the magical Google phrase for finding documentation on the weird things you can do with variables in bash is "variable mangling".

    # Given a path to a file (relative or absolute), print a "file://" URL for
    # that file.
    fileurl () {
      # Split the directory name out of the argument path.
      #   "/dir/subdir/file" ==> "/dir/subdir"
      #   "dir/subdir/file" ==> "dir/subdir"
      #   "subdir/file" ==> "subdir"
      #   "file" ==> "."
      TEMP="/$1"                            # Hack: Prepend a slash so there's at least one
      TEMP="${TEMP%/*}"                     # Chop off the trailing "/file"
      TEMP="${TEMP#/}"                      # Remove the leading slash if it's still there
      DIRNAME="${TEMP:-.}"                  # If DIRNAME is empty, set it to "."

      # Get the base file name from the argument path.
      BASENAME="${1##*/}"                   # Remove everything up to the last slash, inclusive

      # Convert the directory name to an absolute path.
      ABSDIRNAME=$(cd "$DIRNAME"; pwd)

      # Echo the file URL built from the components.
      echo "file://$ABSDIRNAME/$BASENAME"
    }
    Curtis Bartley