Blogging Done Right

It’s weird to write about blogging. It’s like a recursive, self-referencing thought that works in the real world but throws exceptions inside my brain. It’s something we look at, see many people do, and don’t do ourselves. Blogging consistently is Hard. Capital H. I always have things to say. Writing them is what’s hard. Refining them is hard. Deciding if my non-existent audience would want to read what I’m writing is hard. I’m constantly finding things that I don’t think would make a good blog post, or don’t have the time to write something I feel is worth reading. So I’m not going to try to write good blog posts. I’m just going to write about what interests me. If you like it, great! If not, welcome to the other 7 billion 36 million 776 thousand and 270 people in this world right now. (Don’t you love the Internet? Independent research, unverifiable stats, random sites with javascript tickers? Yeah, it’s awesome.)

Besides, no one reads my blog anyways. Except you. Congratulations. Hi.

I think this was all sparked by Steve Yegge. I ran across his post on phone screening when I was looking for some guidelines on interviewing people. I mean come on, the guy worked at Amazon and now Google, so you know he’s smart. Then you start reading and realize it’s half practical, half satire, and all slightly funny at the same time. Just what I needed. If you’re a programmer and want something entertaining to read, read his tour of programming languages:

More to the topic at hand, I haven’t posted anything on my site since January, and I have a pile of notes just SITTING around waiting to be turned into nice, neat, well-formed blog posts. But that doesn’t matter, not unless they’re actually turned into blog posts. Which, as established, I suck at doing consistently. Thus The Final Conclusion™ is that I just need to write. Then write more. Then get better by DOING on a regular basis, improving my writing as I go along, and not being THAT perfectionist. You know, the one that has to have the perfect article at the perfect time correctly targeted at just the right audience… that doesn’t exist. Like I said, no one reads my blog anyways.

So this is a kick in my own pants. A reminder to not care about what people think. To write about what interests me.

Font Hybridization in HTML 5

Over this Christmas break, I was at a friend’s house and got to sit down and tinker with his Mac. I’d forgotten how good fonts look, and I suddenly realized why so many sites are now using custom web fonts. Since I primarily use Windows at work and at home, I get annoyed by fonts that are difficult to read or that don’t render well on the screen. Somehow, I’ve never been 100% happy with font rendering on Windows. It’s been getting better, but it’s still not as good as a Mac’s. Maybe it’s the screen, maybe it’s the OS, maybe it’s the app, maybe it’s a combination of all of the above. Because I’ve been on this HTML kick, and I’m on Windows, I’ve tended toward Cufon as my font replacement tool of choice when building sites, since it’s the only one that produces a “more-reasonable” output on my machine.

But Cufon, by its nature, has several drawbacks. First, you can’t copy and paste text rendered with Cufon. With most of the newer browsers you can select it, but there’s not much more you can do beyond that. So I usually limit Cufon usage to titles, headers, and the more “design-ery” aspects of a page. Second, because Cufon is image based, if someone zooms in on your text beyond 100%, Cufon-rendered text gets all fuzzy the same way it would if you zoomed in on an image. And finally, because Cufon renders with JavaScript on the client, there’s no way to cache the rendered text. JavaScript is fast, but when you’re rendering a large amount of text on a phone, there’s usually no good way around a flash of unstyled content. And it happens each time you go to a new page, because the rendering can’t be cached.

Web fonts, on the other hand, allow you to use fonts much as if you had them installed on your device, with native rendering by the OS. You can select, copy, paste, and use the text as you would any other piece of text. Web fonts are cacheable, so although you could get a flash of unstyled content when you first visit a page, subsequent visits should render immediately. The disadvantage, however, is the OS. On Windows, font rendering just… sucks. So people don’t use them.

But what if there was a way to do a hybrid between Cufon and Web Fonts?

Besides the rendering issue, web fonts are the best option. They’re the most flexible and the most future proof. But they still suffer on Windows, on some phones, and on older browsers that don’t support the new @font-face CSS syntax. So what if we did a hybrid? Use @font-face, then fall back on Cufon for older browsers and for Windows. The advantage is that on a new browser the @font-face fonts will be cached and used for the initial rendering of the page, with Cufon cleaning things up afterward where the native rendering is poor.
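For completeness, the @font-face half of the hybrid is just plain CSS. Here’s a minimal sketch; the font file names and the h1 selector are illustrative assumptions, not the actual assets from my test page:

```css
/* Minimal @font-face sketch; file names and selector are assumptions
   for illustration, not the actual assets from the test page. */
@font-face {
  font-family: 'BebasNeueRegular';
  src: url('BebasNeueRegular.woff') format('woff'),
       url('BebasNeueRegular.ttf') format('truetype');
}

h1 {
  /* decent fallbacks if neither @font-face nor Cufon is available */
  font-family: 'BebasNeueRegular', 'Arial Narrow', Helvetica, sans-serif;
}
```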

Using Modernizr and a little custom JavaScript to do user agent testing, I put together a page to test out the hybrid font idea:

It’s a rough draft. I have some screenshots below from both Macs and PCs (click for full size versions).



For simplicity, the JavaScript code to test for support and to load Cufon and my custom Cufon polyfill looks like this:

  function hasBadFontRendering() {
      // platforms where native @font-face rendering tends to look poor
      var result = navigator.appVersion.indexOf("Win") != -1
          || navigator.appVersion.indexOf("Android") != -1;
      return result;
  }

  var supportsNiceFontface = !hasBadFontRendering();

  Modernizr.load([
      {
          test : Modernizr.fontface && Modernizr.canvas && supportsNiceFontface,
          nope : [ 'cufon-yui.js', 'BebasNeueRegular_400.font.js', 'cufon-polyfill.js' ]
      }
  ]);

Using Modernizr’s yepnope.js loader, I’m able to skip loading Cufon entirely if the browser supports good @font-face rules. There’s more I’d have to do to clean it up before I’d use it in a real setting, but it demonstrates the concept, and is something I could definitely use later as a @font-face polyfill. It does have some drawbacks, though: you have to maintain both your CSS rules and your Cufon replacement calls, and Cufon doesn’t work well with a large amount of body text. So for body text on browsers that don’t support @font-face, I’d fall back to a good secondary font and forgo Cufon.

I hope this got some gears turning. I’m looking forward to some comments.

CSS Is A Lie.

According to Wikipedia:

Cascading Style Sheets (CSS) is a style sheet language used to describe the presentation semantics (the look and formatting) of a document written in a markup language.


CSS specifies a priority scheme to determine which style rules apply if more than one rule matches against a particular element. In this so-called cascade, priorities or weights are calculated and assigned to rules, so that the results are predictable.

From my informal office survey, the general consensus is that the most important thing™ in CSS is that everything ‘cascades’ correctly. That is, rules and styles defined further down in a CSS document override styles specified further up in the document (excluding the !important operator, of course, which you shouldn’t be using anyways). It makes sense if you think about it; after all, it’s not called a Cascading Style Sheet for nothing.

Now, the test. Given the following HTML and CSS snippet, what color will the two paragraphs be?


  <p id="myid" class="myclass">
    Hello World.
  </p>

  <p class="myclass">
    This is another line of text
  </p>


  #myid {
    color: red;
  }

  .myclass {
    color: blue;
  }

  p {
    color: green;
  }

The Answer:
Hello World is red; the second line of text is blue.

Don’t believe me? I put up a demo page with just the HTML and CSS here.

After a decent amount of Googling and reading blogs and the W3C spec on CSS selectors, I now understand how styles are calculated and applied, but it seems to go completely against the ‘cascading’ nature of style sheets: the cascade only applies when the ‘specificity’ values are the same.

For example, the following snippet behaves as you would expect:


  p {
    color: gray;
    font-size: 20px;
  }

  /* ... later ... */

  p {
    color: green;
  }

As you might expect, the paragraph will be green and have a font size of 20px. A rule with more specific selectors takes precedence over a less specific rule, regardless of where that rule is declared in the stylesheet. You can mentally calculate how specific a rule is by the ‘level’ and number of the selectors used. From low to high, you have:

  1. Elements and pseudo-elements, such as p, a, span, div, :first-line
  2. Classes, attributes, and pseudo-classes, such as .myclass
  3. ID selectors, such as #myelement, #theonething
  4. Inline style attributes
  5. !important

If one rule has a selector of a higher level, it overrides the lower-level style. If the levels are the same, the higher count wins. And if two rules are at the same level with the same count (.myclass and .myotherclass), the one further down takes precedence. This is the only time cascading actually happens.
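To make the ranking concrete, here’s a toy sketch in JavaScript that computes an (ids, classes, elements) specificity tuple for simple selectors and compares two rules. It’s an illustration only: it ignores combinators and attribute selectors, it lumps pseudo-elements in with pseudo-classes, and the function names are mine, not part of any library.

```javascript
// Toy specificity calculator for simple selectors. Returns the tuple
// [ids, classes, elements]; compare tuples left to right, highest wins.
function specificity(selector) {
  var ids = (selector.match(/#[\w-]+/g) || []).length;
  // classes, plus anything ':'-prefixed (a simplification: real CSS
  // counts pseudo-elements like :first-line at the element level)
  var classes = (selector.match(/\.[\w-]+|:[\w-]+/g) || []).length;
  // strip ids/classes/pseudos; whatever names remain are element names
  var elements = (selector
    .replace(/#[\w-]+|\.[\w-]+|:[\w-]+/g, ' ')
    .match(/[a-zA-Z][\w-]*/g) || []).length;
  return [ids, classes, elements];
}

// Returns whichever of the two selectors wins; on a tie the later
// declaration (b) wins, which is the only time the cascade kicks in.
function moreSpecific(a, b) {
  var sa = specificity(a), sb = specificity(b);
  for (var i = 0; i < 3; i++) {
    if (sa[i] !== sb[i]) return sa[i] > sb[i] ? a : b;
  }
  return b;
}

console.log(specificity('#myid'));    // [1, 0, 0]
console.log(specificity('.myclass')); // [0, 1, 0]
console.log(specificity('p'));        // [0, 0, 1]
// why "Hello World" is red even though .myclass comes later:
console.log(moreSpecific('#myid', '.myclass')); // '#myid'
```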

It’s something that is unfortunately very subtle because of the way we write CSS: you’re taught from the beginning to start with the most basic, generic styles and work your way through the more specific styles. While this is correct, it’s very easy to go for a long time without running into a situation where a more specific style earlier in the document overrides a more general one declared later.

I had no idea until about a week ago that CSS worked like this. I always assumed that specificity was only used to select the subset of elements the rule applied to, and that if you applied a more general rule after a more specific rule, the more general rule would overwrite anything earlier. This is obviously not the case. If you want to read more, here are links to a few more comprehensive articles on the subject:

CSS Specificity and Inheritance:

Star Wars and CSS Specificity:

Specifics on CSS Specificity:

Art and Zen of CSS:

EventHandler<T> or Action<T>

If you’ve used C# for any length of time, you’ve used events. Most likely, you wrote something like this:

  public class MyCoolCSharpClass {
      public event EventHandler MyCoolEvent;
  }

  public class MyOtherClass {
      public void MyOtherMethod(MyCoolCSharpClass obj)
      {
          obj.MyCoolEvent += WhenTheEventFires;
      }

      private void WhenTheEventFires(object sender, EventArgs args)
      {
          Console.WriteLine("Hello World!");
      }
  }

Later, you need parameters to be passed in along with the event, so you changed it to something like this:

  public event EventHandler<MyEventArgs> MyCoolEvent;

  public class MyEventArgs : EventArgs
  {
      public string Name { get; set; }
      public DateTime WhenSomethingHappened { get; set; }
  }

  ...

  private void WhenTheEventFires(object sender, MyEventArgs args)
  {
      var theCoolCSharpSendingClass = (MyCoolCSharpClass)sender;
      Console.WriteLine("Hello World! Good to meet you " + args.Name);
  }

You add two or three more events, some property changed and changing events, and a class with about 4 properties, 3 events, and a little bit of code now has 3 supporting EventArgs classes, plus a cast every time you need the sender class instance (in this example, I’m assuming the event is always fired by MyCoolCSharpClass, and not through a method on a 3rd class). That’s a lot of code to maintain for a simple class with some very simple functionality.

Let’s look at this for a minute. First, EventHandler and EventHandler<T> are simply delegates, nothing more, nothing less (if you’re not sure what a delegate is, don’t sweat it; it’s not really the point of this discussion). What makes the magic happen for events is that little event keyword prefacing the declaration, which internally turns the delegate type into a subscribe-able field. Essentially, it simplifies adding and removing multiple methods that are all called when the event is invoked. With the introduction of generics in C# 2.0 and of LINQ in 3.5, we have generic forms of most of the delegates we could ever use, in the form of Action<T1, T2, T3...> and Func<T1, T2..., TResult>. What this means is that we can change an event declaration to use whatever delegate we want. Something like this is perfectly valid:

  public event Action<MyCoolCSharpClass, string, DateTime> MyCoolEvent;

And what about when we subscribe? Well, now we get typed parameters:

  ...
  private void WhenTheEventFires(MyCoolCSharpClass sender, string name, DateTime theDate)
  {
      Console.WriteLine("Hello World! Good to meet you " + name);
  }

That’s cool. I’ve now reduced the amount of code I have to maintain from 4 classes to 1, and I don’t have to cast my sender. As a matter of fact, I don’t even have to pass a sender. How often have you written an event that’s something like this:

  public event EventHandler TheTableWasUpdatedGoCheckIt;

Whoever is subscribed to this event doesn’t care who sent it or what data specifically was updated; all the subscriber cares about is that it was fired, nothing more. Even then, in a “you can only use the EventHandler delegate” world, you’re still stuck creating a method to subscribe to the event that looks like this:

  private void WhenTheTableWasUpdated(object sender, EventArgs args)
  {
      // Go check the database and update stuff...
  }

If we use what we’ve learned and change the event to something like this:

  public event Action TheTableWasUpdatedGoCheckIt;

We can write our method like this:

  private void WhenTheTableWasUpdated()
  {
      // Go check the database and update stuff...
  }

Since we never cared about the parameters in the first place.

That’s all fine and dandy, but blindly replacing every instance of an EventHandler delegate with an Action isn’t always the best idea. There are a few caveats:

First, there are some practical limitations to using Action<T1, T2, T3...> versus a derived class of EventArgs, three main ones that I can think of:

  • If you change the number or types of parameters, every method that subscribes to that event will have to be changed to conform to the new signature. If this is a public-facing event that 3rd-party assemblies will be using, and there is any possibility that the number or type of arguments would change, that’s a very good reason to use a custom class that can later be inherited from to provide more parameters. Remember, you can still use an Action<MyCustomClass>, but deriving from EventArgs is still the Way Things Are Done.
  • Using Action<T1, T2, T3...> will prevent you from passing feedback BACK to the calling method unless you have some kind of object (with a Handled property, for instance) that is passed along with the Action, and if you’re going to make a class with a Handled property, making it derive from EventArgs is completely reasonable.
  • You don’t get named parameters by using Action<T1, T2, etc...>, so if you’re passing 3 bools, an int, two strings, and a DateTime, you won’t immediately know what those values mean. Passing a custom args class gives those parameters meaning.

Secondly, there are consistency implications. If you have a large system you’re already working with, it’s nearly always better to follow the way the rest of the system is designed unless you have a very good reason not to. If you have publicly facing events that need to be maintained, the ability to substitute derived classes for args might be important.

Finally, in real-life practice, I find that I tend to create a lot of one-off events for things like property changes that I need to interact with (particularly when doing MVVM with view models that interact with each other), or where the event has a single parameter. Most of the time these events take the form public event Action<[classtype], bool> [PropertyName]Changed; or public event Action SomethingHappened;. In these cases, there are two benefits that you might be able to guess from what you’ve already seen.

  • I get a type for the issuing class. If MyClass declares and is the only class firing the event, I get an explicit instance of MyClass to work with in the event handler.
  • For simple events such as property change events, the meaning of the parameters is obvious and stated in the name of the event handler and I don’t have to create a myriad of classes for these kinds of events.

Food for thought. If you have any comments, feel free to leave them in the comment section below.

Line counting in PowerShell

Quick Tip: If you want to do a line count on a project, a really easy way to do it is with a simple PowerShell command:

  (dir -include *.cs,*.xaml -recurse | select-string .).Count

Add extension types as necessary. Note that it DOES include comments in the line count.
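If you need the same count on a box without PowerShell, a rough POSIX-ish analogue might look like this. It assumes find, grep, and awk are available, and like the PowerShell one-liner it counts non-empty lines (comments included):

```shell
# Count non-empty lines in all .cs and .xaml files under the current
# directory; grep -c . prints file:count, awk sums the counts.
find . -type f \( -name '*.cs' -o -name '*.xaml' \) -exec grep -c . {} + \
  | awk -F: '{ total += $NF } END { print total + 0 }'
```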

Here’s the Stack Overflow article where this originated. I needed something that would run outside of the main solution to take into account all the additional projects and files, so a plugin was not going to work.