Thursday, December 8, 2011

Call for STIC Talk Input

I think I'm going to submit a talk proposal to STIC for the March conference. There are lots of UI/Tools related things I could try to talk about; however, there's only 45 minutes given for a talk slot. So I'm putting out a call for talk input. If you are coming to STIC this year, and have to listen to me for 45 minutes, what should I talk about to make the best use of your time?

Some possible thoughts that have crossed my mind:

  • Here's how Skins work, the issues involved, and where we want to go with them

  • Where 15 years of the VW UI framework has gotten us, how we got here, what we don't like, what we do like, and what we want to fix, and how

  • 10000 ft view of how VisualWorks UI framework fits together, do's and don'ts

  • Rolling your own widgets. The old way. And the new way.

  • Stand at the front of the room and let people throw bread rolls at me

or something completely different?

Private or public responses are fine, and thanks for them.

Thursday, October 6, 2011

Sounding Out the View Tree

A couple of days ago, I asked myself the following question:

What would something like jQuery look like in Smalltalk?

What follows is a sort of experience report and overview of what I came up with. It took me about 3 iterations to get to what's described below. I'm unsure at this point whether it's really pretty good, completely wrong, or sorta there but sorta not.


My motivation for putting this together was some disturbing patterns I've seen all too often in VisualWorks (and for many years). The first is the reliance on the builder variable. When the VW UIBuilder assembles an interface from specs, it hangs on to the widgets that had an ID associated with them. You can then use the componentAt: message to retrieve them in application code. Here's an example from CompiledCodeInspector.
self builder == nil ifTrue: [^self].
w := self builder componentAt: #text.
w == nil ifTrue: [^self].

There are a couple of different problems I have with this. First of all, there's the need to first check if we have a builder. Much of this type of code is written with an "if the builder's there, do this, otherwise who cares" approach. Then we turn around and do a similar check on the result to make sure it's not nil. Later on in this method, not shown, we have to send things like widget and widget controller to it to further manipulate it. #controllerAt:, #wrapperAt:, and #widgetAt: all exist to try and help smooth this out.

We really shouldn't need code that retains artifacts of the assembly process just to get at widgets. That information ought to be encapsulated with the widgets themselves. Instead, we have to worry in two different places about whether we have enough state yet to determine whether we can act. And what we get back is... well, it's a Wrapper. Usually. Of some sort. But usually we want the view object. And so we have to become intimately familiar with what was constructed and how it is related to be able to get at it.

We also may have to worry about where it's actually stored. In the case of subcanvasing, where a different builder is used to assemble subsets of the UI, we end up with cases like the one found in ClassCreationDialog, where we have to add an extra instance variable to keep track of those subbuilders, just so we can use the componentAt: methods. The widgets are there in the view tree, but alas, the only way to get at them is through the builder.

To see examples of the second pattern that bothers me, do this.

  1. Open a System Browser

  2. Select all Packages using Ctrl-a

  3. Select the Rewrite tool (in the mid page tab control)

  4. Enter ``@r container container in the Search for: box

  5. Click the Search... button and wait for the result

One of the more egregious examples is found in SpinButtonController:
| compositeView inputBoxController |
compositeView := view container container.
inputBoxController := compositeView controller.
(view spin: inputBoxController view model value) ifFalse: [^self].
compositeView container container takeKeyboardFocus

Two times in the same method, the code walks up the view tree. Not just with one container send, but with two. There are explicit assumptions being made here about the structure of the view tree. Needing to have one widget reach out and communicate with another is not terrible. But exploiting the particular structure of the view tree violates encapsulation. This kind of code is a huge headache for us as we consider how to move beyond and reduce the number of Wrappers in our view trees. Getting rid of WidgetStateWrappers is probably achievable, but every site like this will need to be fixed. The only information we should realistically be exploiting is that "somewhere above me is an object that fulfills a role I know about, and I'd like to talk to it."

Three Interesting Aspects of jQuery

First of all, I am not a jQuery expert. I've played a little with it. Read about it. Asked questions about it. In those forays, there were three basic properties that intrigued me about it:

  1. Ability to specify criteria to match against widgets in a DOM.

  2. The idea that it is always a collectionish thing that is returned, whether it has 0, 1, or many matches.

  3. Terseness, which allows the programmer to see more code that has to do with manipulating the matches, rather than lots of code finding matches.

Building a Query Model in Smalltalk

The first thing I set out to do was build a Query model for view trees. The jQuery selectors have at least 4 kinds of general queries.

  1. Type (e.g. $("div") )

  2. Id (e.g. $("#cancelButton") )

  3. Class (e.g. $(".demoGroup") )

  4. Attribute (e.g. $("[checked ='false']") )

I started with a basic Query object. A query can be asked if it matches: aViewTreeElement. Different subclasses capture different kinds of matches.

Type Queries

These are probably the easiest to map over. DOMs have things like div and table elements. VisualWorks view trees also have different types of elements, embodied as different Class behaviors. The only real difference is that you have inheritance in the Smalltalk world. So you might want to match all subtypes of Wrapper. This is why I called it a TypeQuery instead of a ClassQuery. You can make one like

Query type: Wrapper
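To make the shape of this concrete outside of Smalltalk, here's a minimal Python sketch of the idea. The class names (Query, TypeQuery, Wrapper, WidgetStateWrapper) are hypothetical stand-ins for the VisualWorks classes; Python's isinstance gives us the same inheritance-aware matching for free.

```python
class Query:
    """Base query object: subclasses decide whether an element matches."""
    def matches(self, element):
        raise NotImplementedError

class TypeQuery(Query):
    """Matches instances of a type or any of its subtypes,
    mirroring 'Query type: Wrapper'."""
    def __init__(self, type_):
        self.type = type_
    def matches(self, element):
        return isinstance(element, self.type)

# Hypothetical stand-ins for view tree classes
class Wrapper:
    pass

class WidgetStateWrapper(Wrapper):
    pass

wrapper_query = TypeQuery(Wrapper)
assert wrapper_query.matches(WidgetStateWrapper())  # subtypes match too
assert not wrapper_query.matches(object())
```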

Id Queries

The skin work we've been doing in the next version of VisualWorks adds an id/id: API to VisualPart. It turns out that we can insert a leading line in the following UILookPolicy method:
widgetWrapperWrapping: aWrapper decorator: aDecorator component: aComponent state: aWidgetState spec: aSpec colors: colors isOpaque: opaqueFlag inBuilder: aBuilder
| sw dt |
aComponent id: aSpec name.

Once that happens, any name given to a widget in the UIPainter is applied as the id of the actual view object (not one wrapper or another). You can make an IdQuery with

Query id: #okButton

Class Queries (or Tag Queries)

In CSS, which jQuery leverages, you can attach one or more arbitrary "class" attributes to an element. It's like being able to keyword tag subsets of widgets. One might, for example, have a set of widgets that all close a dialog: "Overwrite All", "Overwrite Older", "Overwrite None", and "Cancel". One could set the class of these buttons to be ".dialogAction".

We currently have nothing like this in the VisualWorks view tree. But it might be nice to have. Because the word "Class" already has a very canonized and entrenched meaning in Smalltalk, I chose to use a different name for these. I called them tags. Three methods are added to VisualPart to support this:

  • tag: anObject "adds anObject as a tag to the receiver"

  • hasTag: anObject "returns whether the receiver has anObject attached to it as a tag"

  • untag: anObject "removes anObject as a tag from the receiver, if it is there"

My thought for these, like Ids, is that they would usually be Symbol objects, but there's actually nothing that says they couldn't be more complex objects. A TagQuery would be created with an expression like

Query tag: #dialogActions
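The three-method tag protocol and the matching query are simple enough to sketch in Python (again, VisualPart and TagQuery here are hypothetical Python stand-ins; a set holds the tags):

```python
class VisualPart:
    """Stand-in element with the tag:/hasTag:/untag: protocol."""
    def __init__(self):
        self._tags = set()
    def tag(self, obj):
        """Adds obj as a tag to the receiver."""
        self._tags.add(obj)
    def has_tag(self, obj):
        """Answers whether obj is attached to the receiver as a tag."""
        return obj in self._tags
    def untag(self, obj):
        """Removes obj as a tag from the receiver, if it is there."""
        self._tags.discard(obj)

class TagQuery:
    def __init__(self, tag):
        self.tag = tag
    def matches(self, element):
        return element.has_tag(self.tag)

button = VisualPart()
button.tag('dialogAction')
assert TagQuery('dialogAction').matches(button)
button.untag('dialogAction')
assert not TagQuery('dialogAction').matches(button)
```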

Attribute Queries (or Block Queries)

I did not create something like Attribute Queries. The VisualWorks view tree elements don't have rigorously defined attributes. But they do respond to messages of course. So I just generalized this so that you could use a BlockClosure (actually any object that responds to cull:) to select elements. For example

Query block: [:element | element isEnabled]

would be used to select view tree elements that currently respond true to isEnabled. But one could do weirder things with it:

"all widgets in the lower half of a window"
Query block: [:each | (each localPointToGlobal: Point zero) y >= aWindow bounds center y]

The astute reader will note that the previous three queries could all be emulated with a BlockQuery.
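In Python terms, a BlockQuery is just a wrapped predicate; cull: generalizes to "anything callable". A rough sketch (Widget is a hypothetical element):

```python
class BlockQuery:
    """Wraps any callable as a matching predicate,
    like 'Query block: [...]' with cull:."""
    def __init__(self, fn):
        self.fn = fn
    def matches(self, element):
        return bool(self.fn(element))

class Widget:
    def __init__(self, enabled):
        self.enabled = enabled

enabled_query = BlockQuery(lambda e: e.enabled)
assert enabled_query.matches(Widget(True))
assert not enabled_query.matches(Widget(False))
```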

Logical Combinators

For full expression's sake, I also added a NotQuery, an AndQuery, and an OrQuery.
(Query type: Button) | (Query id: #okButton) "Either inherits from Button or has id of #okButton"
(Query type: Wrapper) & (Query block: [:each | each component isNil]) "All wrappers with no component set yet"
(Query tag: #header) not "Everything that is not tagged #header"

View Trees as Collections

A jQuery call results in a collection of 0 or more elements that you can operate on. We already have a Collection API in Smalltalk. I wasn't interested in rolling a new or specific one of those. I'd rather leverage what I can already do with existing Collections. So the other half of this is the ability to create a collection facade to a view tree, rooted in one or more existing tree elements.

ViewTreeSet is a subclass of Collection; it implements the necessary messages and inherits all of the other services that Collection already provides. jQuery (I believe) always searches the whole DOM tree, top down. Since our focus is often at the specific widget level, I did not want to restrict it to that. Two concrete subclasses of ViewTreeSet currently exist. One goes down the view tree, traversing the child elements of the roots, and the other goes up, traversing the parents up to the topmost element. Putting the two pieces together (the Query and the Collection), each ViewTreeSet also has a slot for a query. We can ask ApplicationModels, VisualParts, and Windows for either their childViewTree or their parentViewTree. Now we can do things like compute just how many Wrappers are in the current UI:

(self childViewTree query: (Query type: Wrapper)) size

Or we could enable a bunch of widgets

(self childViewTree query: (Query tag: #dialogAnswer))
do: [:each | each enable]

Or programmatically drive the OK button

(self childViewTree query: (Query id: #okButton)) do: [:each | each simulateMousePress]

Or tell a view's appropriate parent to takeKeyboardFocus without worrying how many layers away it is

(self parentViewTree query: (Query type: WidgetWrapper))
do: [:each | each takeKeyboardFocus]

Since a ViewTreeSet is actually derived from a collection of objects, we can ask them for a childViewTree or parentViewTree as well, and then we can stack queries

((self parentViewTree query: (Query type: BorderDecorator)) childViewTree
query: (Query type: Scrollbar)) do: [:each | each invalidate]

In that example, we're first querying upwards to the nearest BorderDecorator, and then querying downwards to find all Scrollbars and forcing them to invalidate (redraw).
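The whole traversal-plus-query mechanism, including stacking, fits in a few lines of Python. This is a sketch with invented names (Element, child_view_tree, parent_view_tree), not the VisualWorks classes; generators play the role of the lazy collection facade:

```python
class Element:
    """Hypothetical view tree node with parent/children links."""
    def __init__(self, name, children=()):
        self.name = name
        self.parent = None
        self.children = list(children)
        for child in self.children:
            child.parent = self

def child_view_tree(roots):
    """Depth-first enumeration of the roots and all descendants."""
    for root in roots:
        yield root
        yield from child_view_tree(root.children)

def parent_view_tree(roots):
    """Enumeration of each root's ancestors, bottom up."""
    for root in roots:
        node = root.parent
        while node is not None:
            yield node
            node = node.parent

def query(elements, predicate):
    return [e for e in elements if predicate(e)]

scrollbar = Element('scrollbar')
border = Element('border', [Element('text'), scrollbar])
window = Element('window', [border])

# count every element at or below the window
assert len(list(child_view_tree([window]))) == 4
# walk up from the scrollbar: nearest enclosing border, then window
assert [e.name for e in parent_view_tree([scrollbar])] == ['border', 'window']
# queries stack: up to the border, then down to its scrollbars
borders = query(parent_view_tree([scrollbar]), lambda e: e.name == 'border')
found = query(child_view_tree(borders), lambda e: e.name == 'scrollbar')
assert [e.name for e in found] == ['scrollbar']
```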

I found myself struggling to come up with interesting and fun examples, so I apologize if these seem too esoteric. There's a sort of chicken-and-egg problem here. Without the utility, you don't think about what to do with it. And then you have it, and you begin looking around to see if it's just a solution in search of a problem, or if there's real value here. At this point, I'm not entirely sure yet.

Trying to Sweeten Things Up

jQuery uses CSS for its selectors, which uses lots of infix/punctuation characters to mean individual things and keep things pretty tight and terse. We could do something like that in Smalltalk too. But I was interested in seeing how much mileage I could get out of sticking to Smalltalk-legal syntax.

This is a tricky proposition, I found. Basically, I anticipate that if the code I had to input to use these were terser, I'd be more likely to use them. But without having used them much, I'm left trying to guess and anticipate exactly what the syntactic sugar I might use would look like.

I basically used 3 tricks to try and get things short and sweet (or bitter and sour maybe).

Binary Selectors

I chose to reduce the common childViewTree query: part of my expressions to the single ? character. And to use ?? for the parent counterpart. The examples from above now become

(self ? (Query type: Wrapper)) size

self ? (Query tag: #dialogAnswer) do: [:each | each enable]

self ? (Query id: #okButton) do: [:each | each simulateMousePress]

self ?? (Query type: WidgetWrapper) do: [:each | each takeKeyboardFocus]

self ?? (Query type: BorderDecorator) ? (Query type: Scrollbar)
do: [:each | each invalidate]

I've toyed with implementing ?! and ??! as well, which automatically send not to the argument. Note that the do: blocks could be shortened by taking advantage of the fact that unary selectors can be cull:'ed in place of those blocks, but we're concentrating on the query part here, not what we do with the results.

Overloading Literals

The second technique to shorten and simplify these was to make assumptions about certain literal Smalltalk expressions. By having an asUIQuery method sent to the arguments of these messages, we can coerce certain common Smalltalk objects into their Query counterpart objects. I've added it to Symbol, Class, and BlockClosure. This allows us to shorten 4 of the above examples to

(self ? Wrapper) size

self ? #okButton do: [:each | each simulateMousePress]

self ?? WidgetWrapper do: [:each | each takeKeyboardFocus]

self ?? BorderDecorator ? Scrollbar do: [:each | each invalidate]

We can do blocks too

self ? TextEditorView ? [:each | each displayContents isEmpty]
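A sketch of how that coercion might look in Python. The as_ui_query function plays the role of asUIQuery; all class names are hypothetical, and Python strings stand in for Symbols:

```python
class Query:
    pass

class TypeQuery(Query):
    def __init__(self, type_):
        self.type = type_
    def matches(self, element):
        return isinstance(element, self.type)

class IdQuery(Query):
    def __init__(self, id_):
        self.id = id_
    def matches(self, element):
        return getattr(element, 'id', None) == self.id

class BlockQuery(Query):
    def __init__(self, fn):
        self.fn = fn
    def matches(self, element):
        return bool(self.fn(element))

def as_ui_query(obj):
    """Coerce common literals into their Query counterparts."""
    if isinstance(obj, Query):
        return obj                 # already a query
    if isinstance(obj, type):
        return TypeQuery(obj)      # a class literal means a type query
    if isinstance(obj, str):
        return IdQuery(obj)        # a string/symbol stands in for an id
    if callable(obj):
        return BlockQuery(obj)     # a block means a block query
    raise TypeError(f"can't coerce {obj!r} to a query")

class Button:
    def __init__(self, id_=None):
        self.id = id_

assert isinstance(as_ui_query(Button), TypeQuery)
assert as_ui_query('okButton').matches(Button('okButton'))
assert as_ui_query(lambda e: e.id is None).matches(Button())
```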

The problem I run into with this part is that I haven't devised a way to use Symbols directly as stand-ins for both Ids and Tags. Since we can commute spec names to ids easily, using Symbols directly as Id matches makes the most sense right now. I've toyed with ideas like letting the first character of the Symbol mean something. And generalizing it to CharacterArray. But I could go farther and make String arguments a whole embedded syntax similar to the CSS style that jQuery uses.

Another area where this only goes so far is that we can't put together complex queries this way. I'm not sure how often those would exist anyway. One could borrow more from the whole ? theme and add more binary selector sugar: ?& to "and" the argument with the receiver. But we wouldn't be able to do ?|; Smalltalk doesn't allow | as part of a binary selector, except by itself.

We could abuse things like Collections too, so that one could easily assemble "or" lists, something like

self ? #(#okButton #cancelButton) do: [:each | each flash]

From my little playing around, I like the basic techniques here, but I'm not sure how far it scales. Or how far it needs to scale.

Adding Common Actions

Assuming that 90% of the time, we might use something like this to do some common widget behavior, I've added some of those directly to the ViewTreeSet. So then some of the examples from above can be simplified as

(self ? (Query tag: #dialogAnswer)) enable

(self ?? BorderDecorator ? Scrollbar) invalidate

Beyond Cheesy Examples

I did sit down and try to use this on some actual real existing code in the system. The ClassCreationDialog was a good case. It tries to be relatively interactive, and was written by a programmer whose skills I esteem highly: Vassili Bykov. To accomplish what he wanted with VisualWorks standard approaches (plus some technique he added), he had to not only work through a builder, but retain a reference to intermediate sub builders, so that he could hunt down and update widgets. And a number of helper methods had to be written. I was able to get rid of an instance variable and a family of helper methods, replacing them with one longer, but more expressive (IMO), method. Here's a screenshot of some of the differences made:

Followed by a screenshot that shows a sample of the kind of change that is made to the longer validateEverything method

The code on the right is longer, but it is no longer so long and involved that it needs to use a helper method to do what it does.

Michael Lucas-Smith suggested I take a look at some of the methods he put together for xPressly, which was used to give his Smalltalk Solutions presentations on xTreams. He had some methods that were forced to do lots of container container container types of things. So I modified it to use this query stuff. It improves the container container nonsense, but it brings into that much sharper relief how silly it is that you have to go running around in other objects to essentially manipulate the one you intended. Fixing that is a problem for another day, unfortunately.

Current Conclusions

I don't have any yet. I need more data. There are parts of this I feel right about, other parts I'm not so sure. I'd love to get some feedback. Does this just all sound like a bad idea? Or maybe good idea in spirit, but misguided in implementation? Generally warm to it, but issues with certain details?

If you have some code in the Open Repository that you'd like to see how this would change, I'd love to take a crack at modifying it as a way of taking this stuff for a spin, if your stuff isn't too involved.

Also, I'm still looking for a good name for this stuff. I've currently called it WidgetPath, which I think is a dumb name.

Friday, September 23, 2011

One of the Best Bits of Programming Advice I ever Got

Years ago (early 1992), I attached myself to this crazy skunkworks project that was using this weird language called Smalltalk. "Object Oriented" was in its infancy as a "hot" item. High paid consultants. Lots of people laying claim to what this new object religion was all about. This was 5 years before Alan Kay would make the statement "I invented the term 'Object Oriented Programming' and this {Java and C++} is not what I had in mind."

Shortly after hooking up with this whacky group with the whacky language, still confused about what the difference was between an instance variable, a class variable, and a class instance variable, I found myself in a training course taught by Russ Pencin, of ParcPlace. Russ would say something that I didn't really appreciate at the time. Despite not understanding the point behind this sage advice, I endeavored to follow it. It would take years of experience and exposure to appreciate its value. The advice?

Don't make objects that end with 'er'.

That's it. The OOP paradigm sprang to life amidst a culture of what we called "procedural programming." Nowadays we don't talk so much about the comparison between the two paradigms. Probably in part because Object Oriented languages are now a dime a dozen. The OOP religion, in a multitude of flavors, won out. Sadly, I often find myself echoing words I heard Adele Goldberg say around 2000: "Nowadays we have lots of Object Oriented Programming, but not so many Object Oriented Programmers". If there was one piece of advice I would pass on to the hordes of would-be Object Oriented Programmers, it would be the sage advice offered by Russ: "Don't make objects that end with 'er'."

What's in a name anyway? Why is this worth getting excited about? What I've discovered over the years is that the gist of OOP is that we bind behavior to data. As long as you haven't joined the Functional Monks in their Monasteries of Statelessness, programs are made of behavior and data. In classic structured programming, we concentrate on behavior (verbs), and then figure out what data (nouns) we need to make it all work. In other words, we bind data to behavior. But in OOP, we make the locus of programs be the nouns, the data, and then we figure out what kind of behavior we can bind to them, and hope that the problems we hope to solve gel out of the emergent behaviors.

I recently posited to a colleague that in nearly every "er" object case, there was a better name for it. And that giving it a better name would tend to make the design more encapsulated, less spaghetti code, in short more object oriented. It's not a hard and fast rule, but there are a lot of cases where it can improve things.

Take some sort of "Loader" for example. The focus here is on the unit of work it does. It'll have lots of instance variables, lots of arguments, and pass lots of data around probably. Now instead replace that with a LoadRecord and a LoadStream. I'm reasonably confident you'll end up with something that is more akin to what the original Founding Fathers of OOP had in mind. We want to create objects that describe what they are, and then bind behavior to them, rather than focus on what they do, and then figure out what data they'll need to do that.
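As a toy illustration of the shift in focus (in Python, with names invented for this example), compare a verb-focused Loader with a noun-focused record and stream:

```python
# Verb-focused: a "Loader" that owns the whole unit of work
class RecordLoader:
    def __init__(self, lines):
        self.lines = lines
    def load_all(self):
        return [line.split(',') for line in self.lines]

# Noun-focused: a LoadRecord that knows how to come from a line,
# handed out one at a time by a plain generator (the "stream")
class LoadRecord:
    def __init__(self, fields):
        self.fields = fields
    @classmethod
    def from_line(cls, line):
        return cls(line.split(','))

def load_records(lines):
    for line in lines:
        yield LoadRecord.from_line(line)

# Both produce the same data; the second binds the parsing
# behavior to the thing being parsed.
records = list(load_records(['a,b', 'c,d']))
assert records[0].fields == ['a', 'b']
assert records[1].fields == ['c', 'd']
assert RecordLoader(['a,b']).load_all() == [['a', 'b']]
```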

Some er's that I've learned to avoid over the years:

  • Managers - Every time I see one of these, I cringe. People will usually tell me what it does long before they can tell me what it is. Is it a registry? Fine, call it a registry. Is it a history or a log? Call it that. Is it a factory? Call it that.
  • Controllers - Only good controller object I've made in the last 20 years was an interface to a BallastVoltageController that represented a real world object. The fact that every single MVC implementation in the world has had a different role for Controller ought to tell us something about how well that idea fit.
  • Organizer (and many like them) - Focus is on what it does. This is a great example of how easy it is to turn many of these 'ers' into nouns. Call it an Organization. Now we're focusing on what it is.
  • Analyzer/Renderer/etc - Definitely examples of "worker" objects. What if they had been Analysis/Rendering/etc.?
  • Builder/Loader/Reader/Writer/etc - These remove the focus from the objects being manipulated, and tend to assume too much responsibility themselves.
There's lots of exceptions to such a rule of course.
  • There are lots of noun words that end in 'er'. Register. Border. Character. Number. If it's really a noun, fine.
  • There are many 'er' words that despite their focus on what they do, have become so commonplace, that we're best to just stick with them, at least in part. Parser. Compiler. Browser.
  • When you are trying to model a domain object that ends in 'er'. I'm fine with a Manager subclass of Personnel, which is there to refine a type of personnel that has management behavior.
Your mileage may vary, I'm sure there are those that disagree with this. Until you apply the mindset for a while though, you'll never really know. Give it a whirl on one of your projects/designs and see what happens.

Wednesday, September 21, 2011

Don't Try This At Home: Stealing from the stack

I think there are days when I want to do things I know I shouldn't as a programmer. Do others experience this? Some have said that Smalltalk is like a gun, that "with great power comes great responsibility." Sometimes, some of the tricks tempt me, and if I know no one's looking (read: I'm not going to be putting this in any production code), I find myself looking around for opportunities to flex a little bit of language superpower muscle. Just for the grins. Just because I can.

Messing with thisContext and the stack is one of those things. Take the _1:_2:_3:_4:_5: message I was talking about the other day.

The proper and boring way to implement it was like this:
_1: one _2: two _3: three _4: four _5: five

| arguments |
arguments := Array new: 5.
arguments at: 1 put: one.
arguments at: 2 put: two.
arguments at: 3 put: three.
arguments at: 4 put: four.
arguments at: 5 put: five.
^(StringParameterSubstitution default)
originalString: self;
args: arguments;

But, I didn't want to do that. That was too much typing I think. I wanted to be clever, so I did this instead:

_1: one _2: two _3: three _4: four _5: five

^(StringParameterSubstitution default)
originalString: self;
args: (thisContext stack copyFrom: 1 to: 5);

There are no references to any of the method arguments. Knowing that the stack is already an array with the arguments already placed in it, exactly what I want, I just grab that, instead of making my own array populated with the method arguments.
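The same trick has a rough Python analog: instead of naming the parameters, pull them out of the current frame with the inspect module. This is a toy sketch; substitute is a hypothetical stand-in for StringParameterSubstitution, and it is every bit as "don't do this in production" as the Smalltalk version.

```python
import inspect

def substitute(template, *args):
    """Replace <1>, <2>, ... fields with positional arguments."""
    for i, arg in enumerate(args, start=1):
        template = template.replace(f'<{i}>', str(arg))
    return template

def expand_3(template, one, two, three):
    # The boring way: substitute(template, one, two, three).
    # The stack trick: read the arguments straight out of this
    # frame's locals, in parameter order, without naming them.
    frame = inspect.currentframe()
    names = frame.f_code.co_varnames[1:4]   # the three parameters
    args = tuple(frame.f_locals[n] for n in names)
    return substitute(template, *args)

assert expand_3('<1> is <2> years <3>', 'Pat', 42, 'old') == 'Pat is 42 years old'
```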

Don't do this in production code. It's tricky and evil. But sometimes it's good to remind yourself, or learn from others, what this great environment is really capable of doing. Who knows, being aware of it, there may come a point where playing with thisContext or the stack may help you solve a real problem, in production code or not.

A thought about field identifiers

When I was playing with template field substitution the last couple of days, I was again reminded that the common Smalltalk substitution syntax (John Brant informs me that it is indeed in more than just VisualWorks and Squeak) is really frustrating to use for any sort of HTML/XML generation. The use of the < and > means you have to constantly escape those characters in your templates if you are generating any kind of output where you actually want to include the alligator brackets.

And a thought occurred to me. As long as I'm not generating HTML or XML, I can't think of a better field-identifying character to use. There are some others such as { } or [ ] that work visually as well. But for the last 15+ years, we've all been reading more and more of the "when Lisp met alligators" syntax. It's been pounded mercilessly into our brains. We've all gotten quite accustomed to parsing, in our heads, the text that shows up between brackets, apart from the rest. Because of that, using them as field identifiers is actually the best thing possible.

It's the meta that's the Achilles' heel: when you want to use these same field separators to generate other field separators. In other words, I could posit that if HTML/XML had used { } to enclose tags, we'd find ourselves appreciating it most in a substitution syntax, and hating it most when we tried to use it to generate more of the same.

Tuesday, September 20, 2011

A Tragedy: When Localization met Interpolation (repost)

This is a repost from my old blog. When I end-of-lifed that blog, I said I might pull some of those over here. Lukas's comments on the previous Syntactic Tartness for Macro Expansion post reminded me of this one. At the time I wrote that, I failed to give credit to Steve Dahl, with whom I had spent about 2 days kicking ideas back and forth on Skype about this.

Adrian Kuhn commented in a previous post:

"In general, I think that Smalltalk desperately misses String interpolation!"

Yes, I agree. But the devil's running amok in the details. The problem is that I18n translation made it first. There are two ways that I've seen to do translation: 1) statically recompile your program, making a sweep of all (or a subset) of the strings in the program and translating them, or 2) doing translation at runtime, by looking up the string in some database. The first is pretty old school. :)

The challenges with String Interpolation are a few:

Far Reaching Changes

You need to modify the compiler. You might try to have a unary message that evaluates a string, compiling expressions on the fly in the context of the sender. In that case, the tools can do nothing to help you get the code right. They get confused, because you create variables that don't appear to be referenced. You can't do it right in the context of clean closures. So a parser/compiler change is definitely in order. And don't forget the usual rant: it's not enough to modify just the base compiler. You've got to look at it in the context of the debugger's ability to put breakpoints in and step through code, in the context of the RB parser, which you want for rewrites and formatting, and maybe code highlighting as well. This does not make it undoable. It simply means it's not a quick thing you can whip up over a weekend.

What Is It When?

So lets say you have the expression:
string := '[[customer name]] is [[customer age]] years old'.

If you use it as the argument of a nextPutAll: send, you probably want it to be in expanded form. But, if you want to look it up in an I18n dictionary, then you would rather have it in symbolic form.
You could detect string literals with whatever the expression preamble is ([[ in the examples above) at compile time and build some sort of complex literal that held both blocks as well as a string. But you'd still have to figure out how to resolve the message =. In an assert: form, you'd probably want the expanded form, but at dictionary lookup time for translation you'd want the compressed form.

What Happened Elsewhere

It's been interesting to spend a little time looking at how Ruby and Python do these. This page makes it clear that the Ruby gettext guys also understood that it is hard to have your cake and eat it too when it comes to mixing interpolation and localization. The answer isn't as definitive with Python, but I think I've convinced myself it's a similar story after looking at docs for a little while.

Monday, September 19, 2011

Syntactic Tartness for Macro Expansion

One thing that is very natural for Smalltalk image based programming, is to programmatically assemble source and install it into the very same running program. I've been using the RB Change framework to do just that with something I'm working on lately.

To piece together the appropriate source, it's desirable to work from a template string and then fill in the variable parts. VisualWorks and Squeak (and maybe some other Smalltalks use this same approach?) have the ability to expand template strings with macro substitution. The template for a setter method might look something like

'<1s>: anObject
<1s> := anObject'

To fill out those fields, you send messages like expandMacrosWith:, expandMacrosWith:with:, expandMacrosWith:with:with:, and expandMacrosWithArguments:.

I am no fan of this API. First, it is too verbose. When I'm looking at template substitution, I don't want a bunch of other longish selectors. They dilute the information I'm trying to glean as I piece together the template and what's being substituted.

Secondly, I find it doesn't scale well when evolving the code. It's common that I start with a simple template in the first cut of code. Something like
    'Hello <1s>' expandMacrosWith: aName

But as I refactor and discover more needs, the need to add parameters arises. As long as I only add two more parameters, I can just use the variants with the additional with: keywords.
'<1s> ^self <2s> <3s>'
expandMacrosWith: aVariableName
with: aBasicAccessingMethod
with: sizeof + 1

As soon as I go to 4 fields though, I have to change gears and use the expandMacrosWithArguments: API and build the sequence myself. It could be argued that it's best to just always start with this version, but it's the longest of the selectors. If you're using VisualWorks, and don't have the language syntax for array construction (e.g. {statement. statement. statement.}), then it's even funner, because you can use the Array with:with:with:with: expression, but if you need to move to 5 fields, then you've got to change your code all around again.
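The scaling pain comes from having a different selector for each arity. In a language with variadic arguments the whole family collapses to one entry point; a Python sketch of the idea (expand is an invented name, and only the <Ns> form is handled):

```python
def expand(template, *args):
    """One variadic expander instead of a family of
    expandMacrosWith:with:... selectors. Field <Ns> is
    replaced by the Nth argument (illustrative only)."""
    out = template
    for i, arg in enumerate(args, start=1):
        out = out.replace(f'<{i}s>', str(arg))
    return out

# the same call shape works for 1 field...
assert expand('Hello <1s>', 'World') == 'Hello World'
# ...or 5, with no API change and no code restructuring
assert expand('<1s> <2s> <3s> <4s> <5s>', 'a', 'b', 'c', 'd', 'e') == 'a b c d e'
```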

Over the years, I've tried a couple of different experiments to make this all something I liked a little better. They've involved interesting binary selectors (e.g. "%") and proxy objects, or at least fun with doesNotUnderstand: messages. I thought I'd try something a little different. I wanted something that correlated well with the numbered fields, but was uber-terse as well.

So I went with the shortest selector that could possibly work: _1:, _1:_2:, _1:_2:_3:, etc. Written in selector shorthand like that, it's pretty ugly. When actually used in code though, it improves some:
'<1s>At: anOffset
^self <2s> anOffset * <3p> + <4p>'
_1: aVariableName
_2: aBasicAccessingMethod
_3: aByteSize
_4: sizeof + 1

^Array with: (Tools.Trippy.DerivedAttribute
label: ''<2s>''
valueBlock: [self <2s>])'
_1: upperVariableName
_2: aVariableName

^(0 to: <2p>) collect: [:n | self <3s> <4p> + (n * <5p>)]'
_1: aVariableName
_2: anInteger - 1
_3: aBasicAccessingMethod
_4: sizeof + 1
_5: aByteSize

I don't dare call this syntactic sugar. The use of the underscores is too ugly to be sugary. It's a sort of bitter sweet thing, thus the "tart" label.

It does solve two problems nicely. It is very terse. You see a minimal amount of "scaffolding" getting the job done, and are free to spend more time looking at the template and the substitutions. One could get that though by using inlined arrays with a shorter selector, something like:
^(0 to: <2p>) collect: [:n | self <3s> <4p> + (n * <5p>)]' macro: {
anInteger - 1.
sizeof + 1.

One thing you lose with this though, is the strong association between each substitution and field. In the former example, when I glance at the template and see field 4, and then want to know what's being substituted, I see it instantly. I just find the 4 in the selector below. Without that direct link, I have to parse the array linearly to find it.

I haven't used this _syntax enough to decide if its usefulness would overcome its ugliness, but it was interesting to play with it and discover the visual readability aspect. I'll likely use it on some more non-production stuff to get a better feel.

Tuesday, August 23, 2011

I've Inherited an Interesting Dilemma

Consider two browser screenshots. The first is the class side of AlphaBlendedIcons, with the needsComment method selected.

The second is the instance side of NameSpace, again with the needsComment method selected.

In both cases, if you invoke the menu of the selectors list, and choose the Hierarchy Implementors option, you get the following two results, respectively...

So what's the issue? The issue is the presence of ClassDescription in the Implementors browsers. Should it be there or not?

On the one hand we have the notion of "hierarchy." In RB land, a Hierarchy Browser is a pretty common thing. It is a limited view of classes. If you are on the instance side, it tops out at Object. If you are on the class side, it tops out at Object class. From this point of view, where the term "Hierarchy" refers to a browsing view or scope, it is surprising to see these entries for ClassDescription when you ask for Hierarchy Implementors. What you expect to see is only methods that you'd find as you navigated around in the Hierarchy Browser.

Then there is a more pedantic side. It tends to be those who like to think of themselves as "I have ascended into the thirteenth circle of Smalltalk Uber-Knowledge." They know the zen-secret that there's a "crossing of the streams" that occurs at Object class. When a method is not found at Object class, the lookup for the method doesn't stop there, but continues up on the instance side at Class, then ClassDescription, then Behavior, and finally Object (instance). The little up/down arrows we place next to methods even show this. If you look at the first one, it shows that there is a super implementor of needsComment for AlphaBlendedIcons, that is found farther up the search tree.
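The stream crossing is easy to see by walking superclass links yourself. This follows the standard Smalltalk metaclass arrangement; the comments show what each expression answers:

```smalltalk
"Where lookup goes after it runs off the top of the metaclasses"
Object class superclass.     "Class"
Class superclass.            "ClassDescription"
ClassDescription superclass. "Behavior"
Behavior superclass.         "Object"
```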

These two notions of "hierarchy", when viewed from the class side, are at odds with one another. When someone sees that menu option, which kind of "Hierarchy" are they more likely to think about at that moment? Should the behavior be left the way it is, but the menu option reworded to "super/sub implementors"? What would you want it to do?

Thursday, July 21, 2011

OSX Lion: First Day Thoughts

I downloaded and installed Lion yesterday. Mostly, I'm still acclimating before drawing judgements and throwing darts. But one thing stands out already.

There's this great story about how Steve Jobs convinced Bill Atkinson that rectangles with rounded corners were a good thing. The idea being that we as humans tend to prefer them and so would naturally prefer them in our UIs. The rest is history. Over the years, more and more of our UI elements get sanded off corners.

There is another pendulum that's been swinging along the Apple UI Design vector that annoys me. Color. Or lack thereof. It's no secret that Apple uses some of its frontline apps to feel out UI evolutions. Back in 2006, I remember discussing the latest iTunes version, and describing what I saw as an attempt on Apple's part to create "Lickable Motif."

The first thing that keeps hitting me with the subtle changes made to Mail and Finder is that they both seem to have been through another bleach cycle. Mail's top button bar has had all hue rinsed from it. Finder's side bar is nearly colorless. I keep thinking "Those icons are just glyphs from some foreign language. Ignore them." Maybe I should just spend more time in Terminal and use good ol' ls.

I understand some of the influences that drive Apple designers to do this. It's easier to harmonize sets of icons if you use fewer colors. It helps deal with colorblindness issues. Etc.

On the other hand... I'd like to take the Apple UI designers on a walk around their campus. And point at things. And say "See? Color!" If we get rounded rectangles (problematic as they are to graphics libraries) because they show up in real life, why can't we have some color too?

Who knows, maybe everyone at Apple wears black turtlenecks now, and they all really see less color. Yeah, maybe that's it.

Thursday, July 14, 2011

Rope: an evolved Chain

One of the areas my Chain did not come out ahead was in the area of adding truly new keys to a Dictionary (as opposed to just updating what already was stored at a given key) or removing them. Both of these operations potentially involve #become: sends. While VisualWorks #become: is very fast, it's not as fast as some other operations. And not all Smalltalk implementations have a fast #become:. Here's the chart of how well Chain compared to IdentityDictionary for original key add/removal.

While I was on vacation, John Brant pinged me encouraging me to use a #changeClassTo: strategy instead of #become:. To do so, first I cloned the Chain class as a new class called Rope (and the Empty variant as well).

Then we make EmptyRope a subclass of Rope. This is important, because we need an EmptyRope to have the same instance variable layout as a Rope. #changeClassTo: will only change the class of an object if the two classes have the same instance variable layout. We'll just make sure we never actually set or reference the key, value, or nextKnot variables in the EmptyRope subclass.
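A sketch of the two definitions (written with the classic subclass creation message for brevity; the actual VisualWorks defineClass: form, the superclass, and the category are assumptions on my part):

```smalltalk
"Both classes carry identical slots -- the precondition for #changeClassTo:.
 EmptyRope simply never touches them."
Collection subclass: #Rope
	instanceVariableNames: 'key value nextKnot'
	classVariableNames: ''
	poolDictionaries: ''
	category: 'TAG-Ropes'.

Rope subclass: #EmptyRope
	instanceVariableNames: ''
	classVariableNames: ''
	poolDictionaries: ''
	category: 'TAG-Ropes'
```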

In the case of adding a new key/value pair, when our #at:put: reaches the end of the list, the EmptyRope subclass can just implement it as
at: aKey put: anObject
	self setKey: aKey value: anObject next: EmptyRope basicNew.
	self changeClassTo: Rope
And the only thing #removeKey: has to do in addition to the original #replace: trick is do a
   previousKnot changeClassToThatOf: self
to get the last link to be an EmptyRope.

Most of the rest of the code stays the same. We can reduce a couple of duplicates since the classes are now in a parent-child relationship, rather than a sibling one.

The at:put:/removeKey: test is the interesting one to rerun and see how things fare. First of all, we compare the new Rope approach to the original Chain one.

That shows some improvement across the board. The real test though, is how does it compare to the original IdentityDictionary?

The good news is, it's actually faster. Not by a huge amount. But it removes it from something you have to worry about.

I've replicated the Rope code up to the Open Repository (package TAG-Ropes). If you know you're using smallish dictionaries (size <= 15) and need to care about performance, then this might be the thing for you.

It would be interesting to see how such an approach fared in GNU Smalltalk or one of the other Smalltalks.

Wednesday, July 13, 2011

Syntax Highlighting your Blog with highlight.js

I've had some people ask me how I got syntax highlighting for Smalltalk code on my blogger blog. Here's what I did:

  1. Download highlight.js

    Go to Software Maniac's download page. Turn off the languages you're not interested in. Check the checkbox for Smalltalk. And hit the download button. You'll get a zip file that when unarchived has a whole directory structure of files, including the customized highlight.js file for your efforts. You need this customized version because it doesn't support Smalltalk in the stock version.

  2. Host it somewhere

    Find a net presence to host it at. Hopefully, you've either got your own or know someone who can host it for you. I was fortunate to fall into the second category.

  3. Update your blog template

    Edit your blog template (via the Edit HTML link under the Design tab of your blog). You add the following lines near the end of your template

    <script src="http://yourhostedsite/highlight.pack.js" type="text/javascript"></script>
    <script type="text/javascript">
    hljs.initHighlightingOnLoad();
    </script>

    You need to edit the yourhostedsite part appropriately.

  4. Add additional CSS for your blog

    You need to choose a css style you like from the downloaded package's style directory. Then use Design -> Template Designer -> Advanced -> Add CSS (at the bottom) and paste the contents of your preferred styles .css file into that.

  5. Enclose code blocks appropriately

    To have the highlighter apply to a chunk of code, you place your code in between pre and code tags. E.g.

    <pre><code>
    ...smalltalk code snippets...
    </code></pre>

At least... that's how I remember how I did it. It's been a month or so now. Hope that helps. And would love to hear of errors/corrections in the above.

Making Performance Comparison Charts

My last post was submitted just before embarking on a multi-week multi-state driving family vacation. While I was gone, I received a couple of requests regarding the code actually used to draw the performance comparison charts in that post.

I used CairoGraphics to do it. Those that know me are likely not surprised by this. The code is just a big workspace blob, and is somewhat procedural.

One of the challenges I wrestled with was how to turn the raw timing numbers into percent increases or decreases. The easy method of just dividing one value by the other gives values that are not linear when drawn graphically. Or put another way, something that is twice as fast draws differently than something that is twice as slow.

I used this fun snippet to convert the originalTime and newTime values into such a comparison:

(((originalTime max: newTime) /
(originalTime min: newTime)) - 1.0) * (originalTime - newTime) sign
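Plugging a few values in shows why this draws symmetrically (worked examples of my own):

```smalltalk
"twice as fast → +1.0, twice as slow → -1.0, unchanged → 0.0"
| compare |
compare :=
	[:originalTime :newTime |
	(((originalTime max: newTime) /
		(originalTime min: newTime)) - 1.0) * (originalTime - newTime) sign].
compare value: 200 value: 100.  "+1.0"
compare value: 100 value: 200.  "-1.0"
compare value: 100 value: 100.  "0.0"
```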

The large workspace blob that works with a variable called percents follows below, I put comment annotations to delineate the various sections. It may serve as a useful Cairo snippet.

| max min surfaceHeight surface bandHeight xInc text extents y |

"compute some dimensions and create the surface"
max := (percents fold: [:a :b | a max: b]) ceiling max: 0.
min := (percents fold: [:a :b | a min: b]) floor min: 0.
surfaceHeight := (max - min) * 75 min: 300.
bandHeight := surfaceHeight / (max - min).
surface := ImageSurface format: CairoFormat argb32
extent: 600 @ surfaceHeight.
xInc := surface width / percents size.

surface newCairoContextWhile:
[:cr | | matrix |

"setup a default font"
selectFontFace: 'Arial'
slant: FontSlant normal
weight: FontWeight bold.

"flip the coordinate system so we draw from the bottom up, rather than normal y is down style"
cr fontSize: 11.
cr translate: 0 @ surface height.
cr scale: 1 @ -1.
matrix := cr fontMatrix.
matrix scale: 1 @ -1.
cr fontMatrix: matrix.

"do rounded clip/border"
cr rectangle: (Point zero corner: surface extent) fillet: 10.
cr clipPreserve.
cr source: ColorValue black.
cr fill.

"shift up to base line"
cr translate: 0 @ min * bandHeight negated.

"draw columns"
percents keysAndValuesDo:
[:size :percent |
| rectangle baseHue x |
cr saveWhile:
[cr rectangle: (Point zero
extent: surface width
@ ((percent positive ifTrue: [max] ifFalse: [min]) * bandHeight)).
cr clip.
rectangle := Rectangle
left: (size - 1) * xInc + 3
right: size * xInc - 3
top: 5
bottom: -5.
percent > 0
ifTrue: [rectangle top: percent * bandHeight]
ifFalse: [rectangle bottom: percent * bandHeight].
cr rectangle: rectangle regular fillet: 5.
baseHue := (1 / 6 + (percent / 3) min: 1 / 3) max: 0.
cr source: (ColorValue hue: baseHue saturation: 1 brightness: 0.85).
cr fillPreserve.
cr source: (ColorValue hue: baseHue saturation: 0.85 brightness: 1).
cr strokeWidth: 4.
cr stroke].

"draw the column label"
text := size printString.
size = 1 ifTrue: [text := 'size=' , text].
extents := cr textExtents: text.
x := (size - 0.5) * xInc - extents width half.
y := percent positive ifTrue: [extents height negated - 2] ifFalse: [2].
y := y max: min * bandHeight + 2.
cr moveTo: x @ y.
cr source: ColorValue white.
cr showText: text].

"draw axis lines"
min to: max
by: 1 / 2
do:
[:n |
cr moveTo: 0 @ (n * bandHeight).
cr lineTo: surface width @ (n * bandHeight)].
cr source: ColorValue white.
cr strokeWidth: 0.25.
cr stroke.

"draw axis"
cr source: ColorValue white.
min to: max
do:
[:n |
n isZero
ifFalse:
[text := (n + n sign) printString , 'x'.
extents := cr textExtents: text.
y := ((n * bandHeight max: min * bandHeight + 4 + extents height half)
min: max * bandHeight - 4 - extents height half) - extents height half.
cr moveTo: 4 @ y.
cr showText: text.
cr moveTo: (surface width - 4 - extents width) @ y.
cr showText: text]].
cr moveTo: Point zero.
cr lineTo: surface width @ 0.
cr strokeWidth: 1.
cr stroke.

"draw label"
text := '#at:put: - Chain vs CompactDictionary'.
extents := cr textExtents: text.
cr moveTo: (surface width half - extents width half)
@ (max * bandHeight - 4 - extents height).
cr showText: text].

"write the file"
surface writeToPng: 'chart.png'

Thursday, June 23, 2011

Kicking the Hash out of Dictionary

What Led to This Post

First of all, it was that I needed a break. I've been working on ARs, doing some embedded Linux stuff, working on Skinned Scrollbars, and Uniscribe at the same time. So yesterday afternoon, I thought I'd like to take a little journey.

One of the patterns that has become kind of popular in VisualWorks of late is the "Properties Dictionary". There are some drawbacks to them, but one of their advantages is they allow for a sort of "poor man's instance variable pool." Using them, one can store state by name, in a structure that isn't quite as established as standard instance variables. Packages use them. Various parts of the RB use them. VisualParts use them.

These dictionaries have a similar usage pattern. They tend to be smaller in size. Their primary usage tends to be at:(ifAbsent:) and at:put:, where the keys are usually somewhat constant. Or in other words, the majority of the time, at:put: is replacing an existing value, rather than adding a new slot to the dictionary.
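The shape of that usage, sketched with invented property keys:

```smalltalk
"The key set is fixed at coding time; #at:put: nearly always overwrites"
| props |
props := IdentityDictionary new.
props at: #visible put: true.
props at: #visible put: false.        "update, not growth"
props at: #hilite ifAbsent: [false]   "read with a default"
```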

Which led to a bigger question even. Even without worrying about the consistency of keys, what about small dictionaries in general?

Are small dictionaries that common?

Here's a bit of code we can use to get a view on that question:

histogram := Bag new.
Dictionary allInstancesDo: [:each | histogram add: each size].
IdentityDictionary allInstancesDo: [:each | histogram add: each size].

Exporting the data and playing with GoogleDocs a bit, we come up with this:

At least in my image, smaller dictionaries actually take up more space than the large dictionaries (the area under the red curve is greater than that of the blue curve). And obviously, there's a lot more of them than the larger dictionaries.

Another kind of analysis (which I did not do) would be to look at what the access rates of dictionaries are as a function of their size. In other words. Are we sending messages as often to the big dictionaries as we are the small ones?

Some Points of History

That optimizing smaller dictionaries is an interesting pursuit is not a new idea. The original Refactoring engine sported a specialized RBSmallDictionary. It exploited the fact that for its use cases, the sizes were usually only one or two. The overhead of hashing, probing, etc. wasn't really worth it for these usually-one-element dictionaries. Having their own also allowed them to have a specialized removeAll that was very fast.

When Steve Dahl added CompactDictionary for UCA, it looked close enough to RBSmallDictionary that we dropped RBSmallDictionary in favor of it. Both implementations are basically linear search dictionaries.

Over the last few releases, there have been a number of changes to hashing algorithms and Dictionary implementations in VisualWorks, primarily instigated by Andres Valloud, which have changed the value of CompactDictionary/RBSmallDictionary as well. Take a look at the speed boost of CompactDictionary versus IdentityDictionary when sending #at: with Symbols as keys.
It's basically a draw at this point. The #at:put: story shows a moderate benefit for using a CompactDictionary over an IdentityDictionary when your size is 4 or less.

I'm told, that in the way-back-machine, VSE played games with optimizing small dictionaries. I don't have any more details than that though, and would love it if someone could speak truth to that rumor.

#at:(put:) Considered Harmful

The conventional wisdom to gain better dictionary performance is to work on hash algorithms. Make them faster. Make them separate better. Certainly that work will yield results. I think Andres's work has shown commendable efforts in those areas.

Sometimes we have to step out of our box and think about the problem from a different viewpoint though. All of the improvements in hash algorithms ignore another important aspect of Smalltalk optimization. At the end of the day, all of this work on Dictionary will still result in the low level primitives for at: and at:put: running. Not the Dictionary methods, but the low level one, like the one found on Array:

anArray at: 4 put: anObject

The problem with this is that Smalltalk VMs want to be very safe. A bit of work is done every time the at:(put:) primitives fire. First, the VM wants to make sure you passed it an integer. And then it wants to convert your Smalltalk integer representation into a machine one. And then it wants to make sure that number is greater than zero, and no greater than the current size of the variable sized receiver. That means it has to go fetch the current size first. And if that's all OK, then it wants to figure out what the real offset is from the header of the object. So it has to fetch how many fixed fields it has first, and then add that to the offset. And now it can finally assign a pointer in the object's body to the object being stored at that slot. Oh yeah, there's also a check in there to determine if the size was really big, in which case the body of the object is in LargeSpace. You do this kind of work for every single at: and at:put: you do, whether directly or indirectly. A higher level Dictionary>>at:put: will have to first compute a hash, then a candidate insertion point, and then probe via a low level at: to see if it has a collision or existing value, and then eventually do an at:put: when it gets everything straightened out. And there's the overhead of just sending the message.

How much slower is indexed access than fixed variable access? It's easy to write a little snippet that will give us a hint

| point array index |
point := 3 @ 4.
array := Array new: 1000.
index := '888' asNumber.
[100000000 timesRepeat: [array at: index]] timeToRun asMicroseconds
asFloat / [100000000 timesRepeat: [point y]] timeToRun asMicroseconds

I get around 60% slower for VW 7.9 builds. But this is only a hint. It's not really correct, because the point y expression doesn't just measure fixed var access, but also a message send. So in reality, the disparity is greater. I noodled about this for a while; I'm curious if anyone has a better way of comparing the two access methods in a truer measure. Basically, what we want to compare is the speed of the byte code to "push a known index" vs the speed of the #at: send.

A Dictionary with Fixed Slots

This is something I've wanted to try for a while. A linked list implementation where each link exposes a dictionary like interface. I tried to come up with a good name for this thing, and then gave up and called it Chain. It's actually implemented with two polymorphic classes. Here's what a Chain with two elements looks like

Chain and EmptyChain end up having the same basic set of methods. Both inherit from Collection. Most of the methods are a fun exercise in recursive programming. For example, here are the two #at:ifPresent: implementations

at: aKey ifPresent: aBlock
^key = aKey
ifTrue: [aBlock value: value]
ifFalse: [nextLink at: aKey ifPresent: aBlock]


at: aKey ifPresent: aBlock
^nil

In fact most of the EmptyChain methods do simple things like raise an error, return nil, return a copy, or do nothing. The two that are kind of fun are #at:put: and #removeKey:.

Chain #at:put:

The Chain implementation of #at:put: is simple:

at: aKey put: anObject
aKey = key
ifTrue: [value := anObject]
ifFalse: [nextLink at: aKey put: anObject].

This will handle the cases where we already have aKey in our Chain and just want to update the value stored at that key. When we have a new key though, it will make it through to the EmptyChain implementation:

at: aKey put: anObject
| newLink |
newLink := Chain basicNew.
newLink setKey: aKey value: anObject next: newLink.
self become: newLink

There are at least two interesting things going on here. The first is that we use basicNew to instantiate a new Chain link. The reason is that Chain new redirects to EmptyChain new by default, since a "new Chain" should be an empty one.

Since an EmptyChain link doesn't know who references it, we use a #become: to insert the new link in the chain. We could implement these two classes as a doubly linked list, but that requires more space, more management, and VisualWorks #become: is very fast. The trick in using the #become: is that we initially set the newLink's nextLink var to point back to itself. Before we do the #become:, we have another link pointing to the EmptyChain, or perhaps it really is an empty chain, and application code is referring to the EmptyChain instance as the head of the collection. And now we have a new link pointing to itself. After the #become: call, the previous link (or application code) now points to the new link, rather than the empty chain. But we also want the new link to have a nextLink that points to the original EmptyChain. Since the #become: is two way, the new link's reference to itself, is now referencing the terminating EmptyChain.
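If the two-way nature of #become: is unfamiliar, here's a toy illustration (my own example, relying on VisualWorks' two-way #become: semantics):

```smalltalk
"After the swap, every reference to a sees what used to be b, and vice versa"
| a b holder |
a := OrderedCollection with: 1.
b := OrderedCollection with: 2.
holder := Array with: a.
a become: b.
holder first first  "2 -- holder's slot now reaches the old b"
```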

Chain #removeKey:

John Brant and Don Roberts once shared this riddle with me. "How do you delete a single link in the linked list with a pointer only to the link you want to delete?" The apparent conundrum, is that without knowing what the previous link was, how do you update its nextLink pointer to reference the nextLink of the link to be deleted. The answer is, that you don't. Instead, what you do is turn yourself (the link to be deleted) into the next link by copying its key and value to yourself, and then updating your nextLink to the be nextLink of what you just copied. So in essence what you do, is turn yourself into a clone of the next link, and then remove the now redundant nextLink from the link chain.

This ends up being a polymorphic implementation of a private method called #replace:, which a link sends to its nextLink, passing itself along as the link to be overwritten. The Chain implementation is easy enough:

replace: aPreviousLink
aPreviousLink setKey: key value: value next: nextLink

When we're removing the last normal link of the Chain though, the one that points to the terminating EmptyChain, that one basically does the reverse of the at:put:, using the self referential link + become: trick to change places.

replace: aPreviousLink
aPreviousLink become: self

Does it Make a Difference?

Chain and EmptyChain were a lot of fun to code up. Where I was interested in Dictionaries of sizes around 10 or less, it was a really fun way to approach the problem. I did go into this wondering about performance though. How does it stack up against normal Dictionary usage? What follows are a series of performance graphs for different dictionary APIs. In most cases I used Symbols, they have the cheapest hash and = implementations, so we don't get lost in worrying about the overhead of those. Most of the charts show results for Dictionary sizes ranging from 1 to 15.

Accessing Methods

The (positive) indicates that the key was found, or in other words that the exception block was not evaluated. (negative) indicates the exception block was evaluated.

The performance for the negative path deteriorates sooner, because we have to traverse the whole chain to discover it's not there, so it becomes a function of the chain size.

There's a bit of sawtoothing to this chart. It occurs (in part) because these APIs are inherently dependent on the actual Dictionary size, for the Dictionary case. Not reported size, but the amount of memory allocated behind the scenes for a Dictionary of that size. The actual #basicSize of the 1-15 sized Dictionaries is #(3 3 7 7 7 17 17 17 17 17 17 17 37 37 37).

#size is the bane of our singly linked list. For a Dictionary, it's just a return of the tally instance variable. But the Chain has to enumerate all of its links each time to compute the current size. As the chain gets longer, it takes more and more time to compute the size.
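The implementations behind that cost would look something like this (a sketch of my own following the post's recursive pattern, not copied from the actual package):

```smalltalk
"Chain>>size -- each link adds one and recurses down the chain"
size
	^1 + nextLink size

"EmptyChain>>size -- the terminator contributes nothing"
size
	^0
```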


I wasn't as exhaustive with this set; I figured by now the picture was becoming clear.

#reject: is the same as #select:.


Removal is tricky to time. We need full dictionaries to remove from. It's actually the same problem as found with timing the #at:put: of initially new keys. So I married the two together by measuring the time it takes to remove an existent key and then add it back.

The Chain pays a heavy price in this one because #at:put:'ing a new item involves allocating a new link, as well as using #become: to insert it. Net, that's a little slower than the IdentityDictionary, which doesn't need to allocate each time.

On Beyond Dictionaries

We could do more of the Dictionary APIs, but I think the above charts are more than enough to get the picture. For things like property Dictionaries, where unique keys are determined early on, and the size is usually small, an object like Chain can offer a bit of performance boost (if you've determined you're in a speed crunch; remember "Make it Work. Make it Right. Make it Fast.").

Chain has some other interesting properties though. Unlike Dictionary, Chain can use nil as a key. And unlike Dictionary, order is preserved. At least, the order in which you add the initial keys. As a Chain grows, its enumeration order doesn't change. This brings to light an interesting possibility. What if we used a Chain in place of an OrderedSet? We could make something like Chain that left the values out, or just put nil in for the values and simply use the various key APIs. For small OrderedSets we'd likely get a significant speed jump.

Why stop with OrderedSet emulation though. Dictionaries are just generalized Sequences. Or put another way, a Sequence (e.g. Array, OrderedCollection, etc) is just a special kind of Dictionary that limits its keys to a contiguous range of integers starting at 1 and ending at some size. It turns out that making Chain respond to Sequence APIs such as #first, #last, #first:, #last:, #allButFirst:, #allButLast:, #any, #fold:, #reverse, #reverseDo: is a snap.
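For a taste, a couple of those sequence methods might fall out like this (sketches of my own, in the same recursive style as the dictionary methods; not from the actual package):

```smalltalk
"Chain>>first -- the head link holds the first value"
first
	^value

"Chain>>do: -- visit our value, then recurse"
do: aBlock
	aBlock value: value.
	nextLink do: aBlock

"EmptyChain>>do: -- nothing to enumerate"
do: aBlock
```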

Want a growable array that can #do: faster than an Array? Check this out:

In a variety of scenarios, the Chain will actually perform on par with or faster than Sequences. Just don't send #size to them.

Other Thoughts

One could do a number of different things past this point. One could make different kinds of Chain subclasses for identity behavior versus equality. The trick one could use is to have the EmptyChain know the kind of class to use for normal links.

One could avoid the use of the clever #become: trick to add and remove links by creating private helper methods that the basic APIs call on, passing the necessary state along to do the work without resorting to clever #become: incantations.

One could play with having Chain links that stored more than one key->value pair in fixed instance variables. This would reduce the amount of link traversal via message sends relative to fixed var access, though code complexity would need to increase as it had to deal with partially populated links.

Lastly, one has to keep in mind what this exploration has been about. It's been about using fixed var access in preference to indexed variables, and never sending #hash to the keys, sending = only instead. The value of not using #hash goes down though as #hash methods get faster while = methods get slower. In other words, if you're keying with objects that have lengthy equality comparisons and fast hash computations, the inflection point in size where the Chain technique wins out over a traditional hash approach will shift to the left.

Tuesday, May 31, 2011

Discovering Uniscribe

I've been working on a Uniscribe interface to VisualWorks Smalltalk off and on for the last two weeks or so. One of the frustrating things for me so far is the lack of a good post-doc resource to go to for help with this stuff. There are the MSDN docs, which are OK, but when you have questions beyond that, I have as of yet not found any of the resources I'm used to using for this kind of thing, such as mailing lists, etc. And the amount of "tutorial" style pages out there is pretty small. So I thought I'd at least leave a trail of breadcrumbs here.

There is of course the Uniscribe Reference MSDN Docs.

Another valuable resource I found was Uniscribe: The Missing Documentation & Examples, a post written by one of the folks working on Chrome, once upon a time.

The other thing I discovered recently is that where there are OpenType variants of Uniscribe functions, you want to use those. For example, use ScriptShapeOpenType() instead of just ScriptShape().

Do I have a working binding between VisualWorks Smalltalk and Uniscribe? At a limited level, yes. I can deal with the standard kinds of Text emphases (#bold, #italic, #family, etc). Even #color. I'm not yet using any of the line breaking abilities (so it all comes out on one line). What can't I deal with... any string that results in more than one item. An item, in Uniscribe speak, occurs every time the writing system changes. So if you mix Arabic and English, you get separate items for each range. But even more common, you get changes on any English punctuation. So the string 'abc,xyz' actually produces 3 items, one for the 'abc', one for the ',', and one for the 'xyz'. And currently... something isn't right with my interface when this happens. Item one is displayed correctly, but not the following items. When I figure it out, you can be sure there will be a followup post here.

Thursday, May 19, 2011


Each time I do one of these external interface things, I wrestle with the finalization problem. I have to deal with it in Cairo. In Pango. I'm currently working on a Uniscribe interface, and have worked on a CoreText interface. They all have this same pattern. Some external structure or opaque pointer which the library either initializes or creates for you, some functions that take the structure or opaque pointer and make stuff happen with it, and some function that frees the same. It's not enough to just use the free() interface that C provides.

The common pattern on the Smalltalk side is that you have some sort of Smalltalk object that acts as a front (facade) for your external structure/opaque pointer. And the interesting part becomes: how do I make it so that when I don't need the Smalltalk object anymore, and the wonderful garbage collector makes life easy for me, erasing it from existence, the related external resources are let go via the appropriate release/free C function?

In some part, I tried to deal with this in the Weaklings package, but that pattern is "built on top" of the idea of weak slots, and it leaves part of the job to the programmer. Boris Popov tried to fix it in his version 18.1, but I resisted that, because it fundamentally changed the API.

I had yet another go at this in the recent ExRegex package. I don't know if this is the right solution yet, but I like it best so far. What I want is a nice, simple, uncomplicated way, not bound up in other subclasses or frameworks, to indicate that when an object is ready to go away, some arbitrary action happens.

The class FinalizeAction in that package was my solution. It's a simple #ephemeron behavior (nothing to do with the ill-named Ephemeron class). The ephemeral slot (the first instance variable of an #ephemeron type object) is a reference to the object you wish to perform finalization services for. Its other instance variable is action, which can be any object that responds to #value: (things like Blocks or MessageChannels, or even Symbols if you're using newer versions of VisualWorks, or load the simplest of all packages, SymbolValue, in older versions). What I liked about FinalizeAction was that it solved my problem with 1 class, 2 instance variables, 2 class variables, and 8 methods. Not bad. I like small solutions.

So here are some simple examples:

MessageChannel

| b |
b := Pixmap extent: 10 @ 10.
(FinalizeAction for: b)
action: (MessageChannel new receiver: Transcript selector: #print:)

Block (note we're using a zero-arg block, which demonstrates we're actually using the cull: API)
| d |
d := ByteArray new: 100000.
(FinalizeAction for: d)
action: [ObjectMemory current spaceSummaryOn: Transcript]

Simple Unary Message Send
| f |
f := 'howdy.txt' asFilename.
(FinalizeAction for: f) action: #out

FinalizeActions have an instance registry to keep them alive until they fire. I have sought and sought for a scheme that doesn't involve some sort of registry. I thought I had one in the early part of the ExRegex spike, until the basic invariant of #ephemeron based finalization sank in:

You have to guarantee that your finalizer will stay alive longer than the object it's performing services for, and without some sort of other path back to the roots of the garbage collector, there is no way to do that.
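
For readers outside Smalltalk, Python's weakref.finalize is a close analogue of this design (an analogue, not the FinalizeAction code itself): it too keeps every pending finalizer in a module-level registry precisely to satisfy this invariant, and fires an arbitrary action when its referent is collected.

```python
import weakref

class ExternalHandle:
    """Stand-in for a facade object fronting some external structure."""
    pass

log = []
h = ExternalHandle()
# The "action" (here, appending to a list) runs when h is collected.
# weakref.finalize registers itself internally so it outlives h.
weakref.finalize(h, log.append, 'freed')

del h           # drop the last strong reference; CPython collects immediately
print(log)      # ['freed'] (deterministic under CPython's refcounting)
```

The registry is the unavoidable price: without that extra path back to the roots, the finalizer itself could be collected before it ever fires.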

I'm ruminating on where to go with this. Fold it into the Weaklings package? Or just clone it when I need it (given its small size)? I'm also curious whether the basic use API (as shown in the examples above) couldn't use some work to make it seem more natural. Putting a helper on Object would of course make things simpler, but I do try to keep that to a minimum.

Monday, May 16, 2011

Regex not so Regular?

A couple of days ago, on the VWNC mailing list, there was a discussion about the Regex11 package. Regex11 was originally written by Vassili Bykov as a Regex library for Smalltalk. It's actually been around a while and served the Smalltalk community pretty well, I think, in part due to Vassili's excellent coding abilities.

I think it's really cool that someone can sit down and write a complete Regex implementation in Smalltalk. This is part of the Smalltalk ethos. Being able to invent your own future. Write your own software, in any area you like. On the other hand, as I read the comments, I found myself wondering, "it's great that you can do this, but does that mean you should?" Of course, the answer varies. It depends on what your goals are. A certain part of me feels that I don't really want to care this much about how the implementation works. Isn't Regex one of those things that's grown up enough now, that we can just use off-the-shelf services provided by the host platforms? If we just reuse those, we might get some speed (benefits of thousands of code monkeys tuning and tweaking them over the years), and some standardization.

So I went on a little journey. Read on if what I discovered is of interest...

The Package

I put my work in a package called ExRegex. As in "External Regex", using the ExternalInterface features of VisualWorks to bind to an outside-of-Smalltalk shared library. Plus I just liked the way ExRegex looked almost palindromic. And I made a test package called ExRegex-Tests. No surprises here, pretty standard operating procedure.

I only began to implement the substitution stuff; for now, it's just good for matching. One thing I did do differently than Regex11 was employ a symmetric pair of binary selectors for doing Regex matches.
    '(a|b)c' ?= 'ac'              --> true
    'Travis' =? '(Senior|Griggs)' --> false

The ? character always leans towards the side with the regex pattern.

The Tests

I stole the tests from the Regex11-Testing package. I made my own SUnitToo copy of it. I soon found I wasn't satisfied with them. The original tests have a single test method. But said method fetches a clauseArray, which is a series of inputs and expected outputs. There are 3 or 4 relatively involved methods between the loop in the single test method and actually executing a single regex match.

What this means is that when you get a failure, you don't know much. First you have to dig through the little framework that interprets the clauseArray. When I'm trying to understand how someone's Regex framework works, I don't want to have to understand how their Regex test framework works as well. I just want to see very simple lines of code, the kind I would put in a workspace.

Secondly, as soon as you hit a failure, you're done. The clauseArray has 137 entries in it. So if it fails for any reason (error or assertion failure) on test number 22... you have no idea how the others fare. Is it just this one test? Or do others fail too? Is there a common pattern amongst the failures?

I think we forget sometimes that just as easy as it is to write a little DSL or interpretation framework, it's just as easy to write a little bit of code that interprets that into real Smalltalk methods. It's easy to generate code, it's easy to compile it, and it's easy to install it. Here's what I did for this particular case:

| ws |
RegexTest new clauseArray keysAndValuesDo:
	[:index :array |
	array size > 2 ifTrue:
		[ws := String new writeStream.
		ws
			nextPutAll: 'match';
			print: index.
		ws nextPutAll: ' <test> '.
		ws nextPutAll: ' | regex | '.
		ws nextPutAll: ' regex := ' , array first printString , ' exRegex. '.
		2 to: array size by: 3 do:
			[:n |
			ws
				nextPutAll: ' self ';
				nextPutAll: ((array at: n + 1) ifTrue: [' assert: '] ifFalse: [' deny: ']);
				nextPutAll: ' (regex match: ';
				print: (array at: n);
				nextPutAll: ').'].
		RegexTest
			compile: (RBParser parseMethod: ws contents) formattedCode
			classified: 'tests match']]

I didn't even have to bother to format the code; I let the RB services do that for me. What I ended up with was nice, easy-to-read methods like this:

And when I run the tests from within the IDE I get nice feedback like this:

This made it much easier to unravel things that, well, started to unravel after that.

So much for Regularity

Regularity means (among other things) "conforming to a standard or pattern." I'm not sure a more ironic name was ever chosen as a moniker for a "standard set" of pattern matching. The C library interface is pretty standard. You have 4 basic functions:

  • regcomp() - builds up a regex structure based on a pattern and some flags
  • regexec() - compares an input source against the regex structure for matches
  • regerror() - makes human readable strings for your regex structure when parse errors exist
  • regfree() - used to free the various chunks of memory associated with the regex structure

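Here's a minimal sketch (not ExRegex itself) of binding those four functions from a high-level language, the way ExRegex does with VisualWorks' ExternalInterface, using Python's ctypes for illustration. It assumes a Linux/glibc or BSD libc; REG_EXTENDED happens to be 1 on both, and since sizeof(regex_t) is platform-specific, we over-allocate an opaque buffer rather than describe the struct.

```python
import ctypes

libc = ctypes.CDLL(None)   # symbols from the already-loaded C library

REG_EXTENDED = 1     # same value on GNU and BSD; other flags differ!
REGEX_T_SIZE = 512   # opaque over-allocation for the platform's regex_t

def regex_compile(pattern):
    """regcomp() the pattern into an opaque regex_t buffer."""
    preg = ctypes.create_string_buffer(REGEX_T_SIZE)
    rc = libc.regcomp(preg, pattern.encode(), REG_EXTENDED)
    if rc != 0:
        raise ValueError('regcomp failed with code %d' % rc)
    return preg

def regex_matches(preg, text):
    """regexec() returns 0 on a match, REG_NOMATCH otherwise."""
    return libc.regexec(preg, text.encode(), 0, None, 0) == 0

preg = regex_compile('[0-9]{1,3}')
print(regex_matches(preg, 'port 80'))    # True
print(regex_matches(preg, 'no digits'))  # False
libc.regfree(preg)   # the fourth function: release the library's memory
```

Note the manual regfree() at the end: this is exactly the spot where the FinalizeAction machinery from the earlier post earns its keep.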
I knew Regexes varied a little from platform to platform. What surprised me was that Regex11 did stuff the C library one couldn't (e.g. \d as sugar for [0-9] and \s for separators). And vice versa: the C library variant will do [0-9]{1,3} (match 1 to 3 digits), but the Smalltalk Regex11 can't do the {1,3} thing. You'd have to live with + (1 or more) or repeat the expression 3 times.
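
As a hedged aside, Python's re module (a Perl-style dialect, closer in spirit to Regex11 than to POSIX) happens to support both of the features that split the two camps: the \d shorthand and the {1,3} bounded repeat.

```python
import re

# Both shorthands work in this dialect; POSIX ERE has {1,3} but not \d,
# and Regex11 has \d but not {1,3}.
print(bool(re.fullmatch(r'\d{1,3}', '255')))    # True
print(bool(re.fullmatch(r'[0-9]{1,3}', '42')))  # True
print(bool(re.fullmatch(r'\d{1,3}', '1234')))   # False: more than 3 digits
```

Which is the point: "supported regex syntax" is a per-engine question, not a standard.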

But the fun didn't stop there. While both are part of the standard libc, the BSD variant that one finds on MacOSX is different from the one one finds under GNU platforms, such as Linux. And not just in their behavior. While the 4 C functions are the same between the two, and the same symbolic defines for the flags exist, they actually map to different values. For example, the option REG_NOSUB (what you use when you just care whether it matches, not where) is 0x08 on a GNU platform and 0x04 on a BSD platform. And the regex_t structure, which you're responsible for allocating the memory for, is entirely different between the two. For a C program using these libraries, you just use the defines and you're fine; but for Smalltalk (or any other not-compiled-by-the-C-Compiler/Preprocessor language), that more intimate knowledge of what the symbolic values mean becomes more important. Getting around these differences was interesting.

Regular? Anything but. This is kind of alarming, actually. It's not like these things give you an error when you try to use something like \d. The engine just tries to match an escaped d (rather than a digit), and your match quietly fails. The upshot is that you don't have much portability with these things, and as you move from environment to environment, language to language, platform to platform, you'd better have a pretty good test suite around your regexes to make sure they're still matching the way they were when you developed your original application. On this aspect, this is one time where the "do it all in Smalltalk" approach actually has a compelling advantage, depending on your constraints.

There's an excellent exhaustive table at this link that shows a number of common implementations, and how they compare across a whole plethora of different options.

The Speed Factor

What about the speed issue? Again, it depends. For doing simple matches, Regex11 will come out faster; there's less overhead than crossing the C boundary. Let's take a regex for matching IP addresses:

If we match this against simple strings like '' and 'abc', Regex11 is faster. As much as 3 times faster. But if we match against larger strings, such as the guts of the IPSocketAddress class>>allAboutHostAddresses method (an expected match) and the result of Object comment (which shouldn't match), then the tables turn quite a bit. The Smalltalk version is suddenly 210 times slower! This measures both the time to create the Regex object and the time to do the match.
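
The shape of the comparison can be sketched in Python, using a hypothetical IP-matching pattern (the post's actual pattern isn't reproduced here). The absolute numbers are meaningless; the point is that per-call overhead dominates on tiny inputs, while the engine's raw scanning speed dominates on large ones.

```python
import re
import timeit

# Hypothetical IP-address pattern, with the leading/trailing .* discussed below.
ip = re.compile(r'.*\b(\d{1,3}\.){3}\d{1,3}\b.*')

small = 'abc'                               # no match expected
large = ('x' * 200 + ' 192.168.0.1 ') * 50  # expected match, buried mid-source

t_small = timeit.timeit(lambda: ip.match(small), number=200)
t_large = timeit.timeit(lambda: ip.match(large), number=200)
print('small: %.6fs  large: %.6fs' % (t_small, t_large))
```

Running the same harness against two engines (one in-image, one across an FFI boundary) is how you'd see the 3x-one-way, 210x-the-other crossover described above.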

So, I guess it depends on what kinds of Regexes you're doing, and how frequently you're doing them (remember, it should always be "Make it Work, Make it Right, Make it Fast").

I don't know exactly why, but the pattern above actually requires the leading and trailing .* values to match the IP patterns mid-source. I had to add them for Regex11, but the ExRegex interface didn't need them.