Monday, October 6, 2014

Initial Thoughts about Swift

Back in June, Apple announced the new language platform, Swift. I took a brief look at it and then put it aside for the time being. I figured the early adopters could vet things out without me and run around in circles of hysteria for a bit.

Fast forward a few months: iOS 8 and Xcode 6 are both official now, no longer beta. I have a prototype app written in Objective-C that it's time to do a version 2 of. Is now a good time to try and grab hold of the train and jump on? I don't know yet. I'm trying to keep an open mind. Here are a couple of thoughts so far...

Out of Date Information
One of the most difficult bits is that a lot of the information out there on the web is already slightly (or very) obsolete. Much of the initial traffic generated after the announcement was quick off-the-cuff responses, often just there to generate clicks. Looking for really useful and helpful content, one still must filter through a lot of noise. Hopefully, with time, the ratio will tip in favor of the more useful.

I found this to be very true with the first batch of tutorials that were quick to hit the streets. Most are so flat-out trite that they don't really teach you anything; an app that is little more than "Hello World" doesn't take you very far. Furthermore, many of them have issues in their purported source code that prevent you from finishing if you can't figure out what's changed (a common example I saw was issues with the optionals mechanism).

In the end, I found a Tetris game tutorial that I was actually able to complete (Swiftris). As a tutorial it's OK: it has a fun, flippant style, and it actually goes through a fair bit of stuff. It distinguished itself by not being overly trivial and by actually producing a working program. Most of the following thoughts were generated from this tutorial, as well as from reading quite a bit of the Apple Swift docs.

Replacing one Legacy with Another
Back when Craig Federighi announced Swift, one of the catch phrases was "Objective C without the C". Computer language historians will describe Objective-C as a somewhat unholy union of two very different languages: Smalltalk and C. While somewhat effective, it's a weird experience. Being accomplished at both Smalltalk and C, I can personally attest to Objective-C's weirdness. I often giggle when coding Objective-C. In an ideal world, you'd be thrilled at being able to leverage the best of both worlds. But it's usually the case that each is holding the other back in amusing or annoying ways.

So supposedly Apple ditched the C. It's a typed Object Oriented language with a syntax that is more C-like than the Smalltalk keyword style. In the end, I'd say they ditched both, and it's just its own language.

But like Objective-C, which always had to play second fiddle to its C heritage, Swift plays fiddle too. You may see posts pitching things as better or innovative, but what I perceive often is "since we're still using the Cocoa runtime, we had to come up with something." So while C is gone, the Cocoa libraries and runtime are not. And ultimately, Swift has to bend to fit that model. Just like with Objective-C, where I would giggle at how C would force something silly into the marriage, I find myself asking "why did they do that??" and usually the answer is "ah, because the Cocoa runtime forced their hand there."

The Law of Conservation of Ugly wins again.

Head Turning Paradigm
One of the things that messed with me at first is that Swift function signatures are backwards. In C, a function signature/definition might look something like:

  float doSomething(int arg1, double arg2)

Or in a more abstract sense

  returnType functionName(typeQualifier1 argName1, typeQualifier2 argName2)

This is a pretty common pattern in many languages. But in Swift, there's a game of musical chairs that is played so that we end up with things in different order. The equivalent Swift variant is

  func doSomething(arg1: Int, arg2: Double) -> Float

And put abstractly

  func functionName(argName1: typeQualifier1, argName2: typeQualifier2) -> returnType

I don't know how I feel about it. It's different, so it kind of feels fresh and new. OTOH, my brain has spent a lot of years learning how to scan the opposite order, where qualifiers precede what they annotate rather than follow it.

:, :, and :
One of the things that makes the C part of Objective-C annoying is that C, with its many years of evolution, can often feel complex to parse. You have to look at the context to figure out what a given character does. The : character, however, doesn't play much of a role in C. While cleaning up the use of other infix characters, Swift decided to celebrate the : character.

So far, I've counted at least three different uses of this character that I have to press the shift key for (Swift has relegated the easier-to-type semicolon to near nothingness, using it only as a multiple; statements; on; the; same; line; separator, just as in Python).

The first, as shown above, is that it is used in function signatures to "attach" the type of an argument to the back side of it.

The second is that it is used when calling a function (or method). It looks subtly similar to a keyword style invocation.

  Point(x: 4, y: 2)

Yes folks, Swift doesn't make you choose between a comma-separated argument list (C style) and a colon-delineated keyword list interspersed with arguments (Smalltalk style): you get to (must) do both! Type out the function name, the parens, the commas, the keywords, and the colons. It's like a politically correct function signature. It's so all-inclusive.

What I find particularly disingenuous about the readability of this, though, is that it undoes what I just got used to. I had decided that : was how I attached annotating information to a keyword (e.g. the type), but here the annotating or qualifying element precedes it.

This is not a show stopper. But what it means is that your brain can't use a simple pattern match to put the pieces together. You can't see a piece of code and instantly know if you're looking at a function definition or a call. Instead you have to parse the surrounding context to figure out what you're seeing.
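
A contrived side-by-side sketch of the two directions the colon reads (written against current Swift, where call sites require argument labels; the function and names are made up for illustration):

```swift
// In a definition, the colon attaches a type *after* the name it annotates.
func move(x: Int, y: Int) -> String {
    return "moved to (\(x), \(y))"
}

// In a call, the colon attaches a value after a label -- the qualifying
// element now sits on the other side of the colon.
let result = move(x: 4, y: 2)
print(result)  // moved to (4, 2)
```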

The third use is to indicate that a list (array) is not a list, but a dictionary. Dictionaries and Lists both start and stop with the [ ] characters in Swift, and the elements are separated by commas. But to figure out whether it's a literal Dictionary or List, you'll have to peer inside of it, scanning its contents to see if you can find a :. If you do find one, then you have a Dictionary. Then please scan back to the beginning to see if it all started with a [ or a (, so you can disambiguate whether it was a function call or a dictionary.
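
A minimal sketch of the bracket ambiguity (the variable names are invented for illustration):

```swift
let list = [1, 2, 3]                 // Array literal: commas only
let dict = ["one": 1, "two": 2]      // Dictionary literal: only the colons give it away
let empty: [String: Int] = [:]       // even an empty Dictionary needs its colon
print(list.count, dict.count, empty.count)  // 3 2 0
```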

Keep an open mind, I keep muttering to myself about this one. Maybe some zen unifying principle will befall me eventually, and I'll see the wisdom in the ambiguities.

Wrap, Unwrap, Ugh
Objective-C uses the nil message eating pattern. You can send messages to the nil object, and things don't blow up; it just silently does nothing. It's not really Objective-C per se, it's the implementation of the Objective-C runtime engine. And since that doesn't go away with Swift, they had to allow for that kind of thing. The solution is to support optional types.

  var TheAnswer:Int? = 42

That says that TheAnswer can be an Int or it can be nil. Anytime I want to access it though, I have to remember that I declared it with a ? and use a ! to get the value out. But if that bothers me too much, I can live a little dangerously and declare it as

  var TheAnswer:Int! = 42

This says that it must be an Int, but it recognizes that until I get it initialized it might not yet be, so I'll have to be careful.

This need to paper over the Objective-C/Cocoa patterns of nil is one of those cases where I see the legacy compromising the new. Maybe I'll be wrong and become a big fan of the optionals system. So far though, the compiler is constantly nagging me to add !'s or ?'s here and there. Sometimes I don't entirely understand why. So I'm not sure it's a productivity winner for me at all yet. We'll see how the lay iOS app developer deals with it.
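
For what it's worth, here's a small sketch of the dance (the variable name is mine, purely for illustration):

```swift
var theAnswer: Int? = 42        // holds an Int or nil

// Forced unwrap with ! -- fine while it holds 42, traps at runtime on nil.
let forced = theAnswer! + 1     // 43

// Optional binding -- the safer idiom the compiler keeps nudging you toward.
if let answer = theAnswer {
    print("The answer is \(answer)")  // The answer is 42
}

theAnswer = nil
print(theAnswer ?? -1)          // -1; unwrapping with ! here would trap
```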

Finally for now: with all this type goodness and the improved completion and playground, I was surprised to find that if you highlight a chunk of Swift code and choose the Xcode Refactor menu option, you'll be rewarded with this wonderful message:

Hopefully, we'll see that go away as Apple continues to mainstream its new darling language.

Monday, September 29, 2014

Dipping Toe in Water

Two plus years ago, I decided to take a hiatus from blogging. In addition to a cessation of blogging, I also delisted from Facebook and Google+. I dropped off of a bunch of mailing lists and sort of walked away from an online persona I had spent a bit of time curating. I had always thought maybe I'd write a "year later" post to try and put some of the life/career changes that were going on in perspective.

A year came, and I thought about it. But the thoughts didn't gel enough, so I put a retrospective aside for the time being and went back to work. I was happily coding in C with 8K of RAM by that point, smattered with Python. And then another year went by, and I was amusedly coding in iOS by then.

This morning, as I was working my way through a tutorial on Swift (Apple's new language), I thought I'd have a go at it again. It probably won't be (much) about Smalltalk; I have done very little of that in the last 2 years. But I do miss the cathartic process of journaling my passage through programming.

We'll see how it goes.

Friday, June 1, 2012

Some farewell thoughts/code on widget layout/placement

A couple of releases ago, I introduced an object called Panel to VisualWorks. New UIs were put together with it, including among others the BundleOrderTools, PrereqTool, and new Change Tools, as well as the new Skinny widgets.

Panel was a hard swing away from the traditional VisualWorks layout facilities. It took the position that layout was entirely a container responsibility (whereas the traditional framework puts a lot of emphasis on encapsulating the layout parameters of a widget with it on a one-to-one basis). The advantage of doing it this way was that you could build layout algorithms that better took into account the interplay between the different children. It was a hard swing because, rather than offering a rich set of well-understood ideas, I made egregious use of blocks to pull it off.

A couple of months back (maybe even 6), I sat down and began playing with some ideas that were a little more "half way." Having done VisualWorks for many years, and having explored ideas with Panel, I was interested in addressing the following:

  • Neither Panel nor CompositePart has a good way of differentiating what their ideally composed size would be from what their layout ended up being. In other words, the preferredExtent of a Composite is just whatever it ended up laying things out as.
  • While various widgets in the system can answer preferredHeight/preferredBounds/preferredWidth/preferredExtent, they don't deal with the need to sometimes set or tune these values on a per instance basis.
  • 95% of layouts follow one or two axes (e.g. I want a "row" of buttons).
The unfinished product of this is published as WidgetRowsColumnsAndWeaves, replicated into the OR.

It has two primary types of classes in it. One set is the ViewStripe and ViewWeave classes. ViewStripe is a one axis layout container. It can be configured as a row or a column. And ViewWeave is a two axis container, what some might think of as a GridBagLayout (or whatever Java calls those things).

The other half of the classes are the EdgePlacement classes. These are the worker objects that a ViewStripe or ViewWeave might use to place its child widgets. They make heavy use of the RectangleEdge classes that were integrated in VisualWorks 7.9.

One of the things I realized when working on this was that when I think programmatically about widget placement, I *don't* think in rectangles. A widget's frame may be a rectangle, but I don't compose them that way. I think in axes. I might have a thought process that goes something like "I need a row of widgets. Vertically, I want them all to hang from the top, inset down by 2 pixels. And horizontally, I want them evenly distributed, so that they're all the same size, with a gap of 4 between them, and edge insets of 5." See how the reasoning is about the axes separately?

So when you configure the layout of a ViewStripe or a ViewWeave, you see messages like

row leftRight stretchAll
row topBottom alignTop

The leftRight or topBottom message will return either a SingleCellPlacement or a MultiCellPlacement depending on how it's configured.

There are a variety of tests and examples in the code that are worth perusing. Here's a piece of the ViewWeave exampleTicTacToe method:

me leftRight stretchAllCells.
me leftRight perCell alignCenter.
me topBottom stretchAllCells.
me topBottom perCell alignCenter.

That single "two-scrollbar-long" method is able to define a complete TicTacToe game. There's also an exampleCalendar which actually does quite a bit more with the features in there.

I used properties a bit to pull some of this off. You can attach a #cellWidth or #cellHeight property to a widget that can be used to tune its cell size along an axis. You can do things like set #cellBackground and #cellBorder as well.

Overall, I was somewhat pleased with this. I regret that I won't be staying around to see it to completion. Maybe someone will pick it up and run with it.

Tuesday, May 29, 2012


As promised, here's one of those "tying up loose ends" things.

A couple months ago, I posted a prototype of some code inspired by JQuery like behavior for VisualWorks view trees. It got a variety of feedback. A lot positive, some skeptical, some mixed.

Recently, I've been working on modifying the newer VisualWorks comparison tool that presents changes in a disclosure/rollup navigation style. I've been working to put filters in it, so you can filter out different kinds of changes (e.g. hide all the category changes, or show just the additions).

I found myself wanting that JQuery-like behavior for lightweight communication between view tree objects again. I did not want to fabricate a model with dependencies just to facilitate some cross talk. So I took a "go around #2" at the idea. This time, no funky syntax. More based on real world needs. Definitely lighter weight.

It's been published as QueryTwo to the Open Repository. What will come of it? I don't know at this point. Maybe it'll get integrated, maybe not, that's up to others to decide now.

Here is an example of me using it in real life (instead of hypothetical examples).



(self query)
type: AbstractComparisonRollupView;
do: [:view | view hideTypes: hiddenTypes].
self updateCellFills


It reminds me a lot of writing Glorp queries. Similar patterns: you create one, send messages to configure it, and then enumerate it. Or kind of like the Seaside html writer pattern too. Make one, configure it, execute a block for it.

What follows is a portion of the class comment that describes usage:

The most common case is to ask a VisualPart or Window to create one for you using

self query

This will return a query that has the receiver as the root of its query path. One can send top to the query to shift the root of the query to the topmost element of the view tree (e.g. the window at the top).

You can also ask an ApplicationModel to create one

myApplicationModel viewQuery

The pattern is that after creating a query, one sends configuration messages to it, and then invokes various collection methods (e.g. do:). The enumeration methods should be sent after the configuration methods. There are a couple of different methods that govern which objects in the view tree are traversed; they come in pairs:

Traversal Configuration Messages

up - causes the query to proceed upwards through the parent path from the root object
down - causes the query to proceed downwards through the children of the root object (this is the default if neither up nor down is sent to configure the query)

withRoot - causes the query to include the root in its traversal
withoutRoot - causes only parents or children (as configured by up/down) to be traversed (this is the default if neither withRoot nor withoutRoot is sent to configure the query)

one - causes the query to cease traversal after the first match is found and enumerated
many - causes the query to traverse all elements, matching as many as encountered that match (this is the default if neither one nor many is sent to configure the query)


Adding queries controls which elements of the traversal are "matched" and thus show up in things like do: enumerations. By default the query will match everything. Methods found in the queries method category provide utility methods for setting up some common queries. Ultimately, they all pass through the addQuery: method. The argument to this method is a block, which is cull:ed for every element in the traversal; those that answer true to the block will be enumerated. Repeated query configuration messages will AND the queries together. The method reset will return the query to its default [true] state.


(self query id: #foobar) any
"will return the first element with id of #foobar"

(self query top property: #frame satisfies: [:rect | rect area isZero]) not invalidate
"invalidate all widgets in my window that have real area to them"

(self query up withRoot) do: [:each | each flash]
"flash me and all my parents"

(self query top hasProperty: #UpdateGroup) do: #update
"send #update to all elements in my window that are marked with the #UpdateGroup property"

Stepping Out of the Balloon

Many many years ago, I returned from an LDS Mission in the lovely country of Norway. It was Christmas of 1991. I took a job at what was then Siemens Nuclear Power. In the months that followed, I was introduced to this novel computer programming language called Smalltalk. I took to it, and I like to think it took to me. For the last twenty years, I've done quite a few things with it. From writing nuclear fuel design and assembly software (which is still running today) to making sure that the French fries, green beans, and much of the rest of the world's food is a little cleaner and better. From numeric modeling to implementing frivolous things like roman numeral message selectors and goto. And quite a bit of toolsmithing. To say it's "served me well" is an understatement.

The wonderful world of Smalltalk technology and philosophy wouldn't have been as enriching for me if not for the wonderful community of people I have rubbed shoulders with over the years. I remember my first post via bitdearn to comp.lang.smalltalk back in 1994. Meeting people at conferences such as OOPSLA, ESUG, STIC, and others. I have made a ton of friends and come to admire the work and enthusiasm of so many people.

Working at Cincom, the "original" commercial Smalltalk vendor, has always been a sort of pinnacle in my Smalltalk pilgrimage. A chance to be at a hub of where Smalltalk was happening.

But all journeys must come to an end. And the time for this journey, for me, for now, has come to an end. On June 4th, I will begin work at Nelson Irrigation, doing embedded automation work, sprinkled (that's a pun) with a variety of end user application work. I am super excited. It's a neat project, a neat company, and an indescribably neat culture.

But it means I'll be dropping out of that central involvement in the Smalltalk community. I may still do some Smalltalking for sure, and the ethos that is Smalltalk will permeate all the work I do, but it's unlikely I'll show up at a Smalltalk conference in the near future or be active in the mailing lists as a heavy contributor.

And so it's in some ways, a probable good bye for me. And that makes me sad. And yet happy, because it's better to feel sad about what I'm losing with the community, than thrilled to be shot of it all.

I also want to point out something my departure from Cincom does NOT mean. There have been some other prominent names that have left Cincom recently, and one might assume there was a sinking-ship meme going on. Such is simply not the case with me. The timing of this opportunity to learn and be involved in some new and different things was out of my hands. When it surfaced, unfortunate timing aside, I felt I could not pass it up. So please don't read any sort of ill-boding fate for Cincom or VisualWorks into my departure. I have faith in the people that remain, and in the people that will replace me; I'm sure they'll take the balloon to farther and better heights than I was capable of. Any age or oddities aside, it remains some of the best tech out there.

As for this blog, I'm not sure what will happen. The purpose of this blog was always meant to be about Smalltalk, and in particular the live "biological" nature of the Smalltalk program philosophy. There are one or two things of the normal ilk that I'd like to write about based on some work I've been doing of late, and then, it'll likely take a hiatus, possibly permanent.

If our paths don't cross in the future, in the immortal (and skewed) words of Spock, may you "Learn Long and Prosper." And remember, "Dead men never wish they'd spent more time at the office."

Tuesday, May 15, 2012

Caching and The Way of the Object

Lately, we've had an internal debate about how to make some places where we do sorting go faster. Of course, there's always the caveat: Make it Work, Make it Right, and if you need to, Make it Fast. Learning when you need to care about that last step is always a sort of art based on experience. Often the answer is simply "all other things considered, it's fast enough, I've got bigger problems to solve elsewhere."

Let's take an example though. Take a collection of Class objects (we'll leave the MetaClasses out), and sort them by their response to toolListDisplayString:

| classes |
classes := Object withAllSubclasses reject: #isMeta.
classes sorted: #toolListDisplayString ascending

In our example, we're using VisualWorks' ability to substitute simple Symbols as replacements for BlockClosures that send a single unary message to their arguments. It is equivalent in behavior to the more traditional:

| classes |
classes := Object withAllSubclasses reject: [:each | each isMeta].
classes sorted: [:each | each toolListDisplayString] ascending

Sending ascending to a BlockClosure before using it was first developed back in this post and this following post. And that was then integrated into VisualWorks 7.8 (or was it 7.8.1?).

The problem with our example is that the method toolListDisplayString is not cheap. It's more than just concatenating strings for class and namespace names together. It looks at how much context needs to be added to the class name by itself to make it unique. Or put another way: since there are multiple classes in the system with the name Text, it determines it must add some info about the containing namespace, while the name PostgreSQLEXDIBLOBManipulationOutsideTransaction is probably unique in the system and doesn't need any namespace info to contextualize it.

The core default sort algorithm in VisualWorks is a hybridization of quicksort and insertion sort. The implication of this is that the somewhat expensive toolListDisplayString method may be called repeatedly for some objects. That means redundant CPU cycles.

A common solution to this kind of problem is memoization. Memoization is basically a fancy word which means "cache the results of your computation function, so you only evaluate the function once for each unique input and just look up the cached result for subsequent calls."

Memoization around sorting call sites can be accomplished in a number of different ways.

In Place

The first and simplest way is to simply do it right at the sort site. We could rewrite our example to read:

| classes memory |
memory := IdentityDictionary new.
classes := Core.Object withAllSubclasses reject: [:each | each isMeta].
classes sorted: [:each | memory at: each ifAbsentPut: [each toolListDisplayString]] ascending

This is the simplest thing that could possibly work. That's its single advantage. The disadvantage is that it adds a bit of code bloat at every place we decide this is worth doing. It intermingles with what was otherwise pretty simple and easy to read code. Flipping back and forth between memoized and non-memoized is a pain. And it gets done again and again at each call site, so there's no real reuse involved. The risk of implementing it wrong is retaken at each implementation.

The desire to be able to easily flip back and forth between memoizing and not shouldn't be underrated. Memoization is not free. It costs cycles. It is usually trial and error, under conditions that the programmer knows to be common for his code, that determines whether the overhead of memoizing is less than the cost of the original redundant function.

This technique is best for those that like to write more code. If you like to brag about how much code you've written, how many lines, classes, or methods, this might be for you. It's simple, and you can demonstrate your superior typing speeds.

More Sort Methods

Another approach is to add a new sort method. VisualWorks already has sort, sort:, sorted, sorted:, sortWith:, and probably some I've missed. Application developers tend to add one or two of their own. A common one in the past has been sortBy:, which supports using a single arg block. So you figure out how many of these APIs you want to replicate as memoized alternatives and implement them, for example memoizedSortBy:, etc. This is if you're a good citizen. If you're not so kind, you use something that looks like just another general purpose sorting API (e.g. sorting: aOneArgBlock).

Implementing memoizedSortBy: gives you the advantage of optimizing things a little differently. You can choose to build a parallel vector of objects by collect:ing the function results, retaining index information, sorting those, and then basically applying those indices to the original input set. Or you can just go with the Dictionary and at:ifAbsentPut: approach.

Now the only change we need to make to our call site is to change it to:

| classes memory |
memory := IdentityDictionary new.
classes := Core.Object withAllSubclasses reject: [:each | each isMeta].
classes memoizedSortBy: [:each | each toolListDisplayString]

You'll note that we don't have ascending in there anymore. The SortFunctions stuff is basically incompatible with this approach. Since this API wants to work with single arg blocks, whose results it's memoizing, it has hard coded the sort direction inside of it.

I consider this the C Programmer's (or procedural) Approach: if at first you don't find a function, try, try another one. That it is, in this simplistic form, incompatible with the SortFunctions mechanism is personally aggrieving to me (we lose the elegance of setting the direction, as well as chaining functions or deriving our own rocketship sorts). Another disappointment is that it's one more API I have to figure out whether I should use. I see a family of sort methods, and I've got to figure out (or recall) what the different nuances of each are (this one takes one arg, this one takes none, this one takes two, each has different trade offs, etc).

Finally, it limits the technique of memoization to sorting. What if I want to use memoization for collect:ing over a collection that I know has redundant elements? In that case, I have to go back to the In Place approach.

The Way of the Object

I'd rather take a page from the SortFunction technique. BlockClosures (or more generally, objects which respond to the message value: and fill the role of functions) are real Objects too. And I'm declaring that they too have a right to be included in the General Love Fest of Polymorphism. The idea here is that we add a new memoizing method to BlockClosure (and to Symbol too, so Symbols can continue to double as simple BlockClosures). Sending memoizing to a BlockClosure returns a MemoizedFunction object which can do value: just like a BlockClosure, but it keeps a memory of evaluations and uses those when found. My first-cut implementation is published as TAG-MemoizedFunctions in the Open Repository.

Now our example just turns in to:

| classes |
classes := Object withAllSubclasses reject: #isMeta.
classes sorted: #toolListDisplayString memoizing ascending

For this simplistic example, slapping memoizing in there is a 10x speed boost.

What do I like about this approach? First of all, it was fun. This kind of thing, to me, is where the zen of Object Oriented dispatch is at (I don't pretend to be brilliant about this at all, Wilf LaLonde probably wrote an article demonstrating this 20 years ago). I like that it is terse. I like that it localizes the decision about whether to memoize around the function itself rather than the API using it. This is the fastest/easiest way to toggle memoization on and off to profile the differences. I like that I can use it with collect:, or detect:, or allSatisfy:, or any method that makes use of first class function objects. And I like that it only took 10 methods and one class to do. Because Less is More.

Happy Memoizing!

(Why does Apple insist on constantly changing "memoizing" to read "memorizing"? Grumble...)

Monday, April 30, 2012

Smalltalk meets Cubism

Every time I meet up with Alexandre Bergel at a Smalltalk conference, we talk about Mondrian. And I always ask him the same question: "Why is it that every time I see Mondrian, it's always about Rectangles?" In some ways, it's appropriate that Mondrian is always about rectangles. Google Mondrian, and the first images you'll see are all about rectangles. Nearly all of the artwork associated with Piet Mondrian is a love affair with rectangles.

Anyway, I thought it was time to put up or shut up. I want a new question to bug Alexandre about when we cross paths in the future. So I decided to play a little. I wasn't interested in rewriting all that Mondrian is. I just wanted to experiment a little with other polyshapes to express multiple simultaneous attributes of subjects I was trying to visualize. Mostly, I was interested in playing with MeshGradients using Cairo, because I was curious if I could find an interesting problem I could use mesh gradients with.

If you google Cubism, you'll see the artwork that I was inspired by. According to Wikipedia

In cubist artworks, objects are broken up, analyzed, and re-assembled in an abstracted form—instead of depicting objects from one viewpoint, the artist depicts the subject from a multitude of viewpoints to represent the subject in a greater context.

That sounded exactly like what I was trying to do. I published my couple-day prototype in the Open Repository in a package called TAG-Cubist. I realized in playing with this that it's not just about rectangles (of course), but also how important the layout of the per-subject graphics is. I didn't do anything other than present them in a tiled format. I'll leave that kind of thing to others. Here are screencaptures of the four example methods I put on the Portfolio class (a Portfolio is a collection of similar drawings for a group of different subjects).

Some of the methods found in the Portfolio object, showing clockwise, from noon high position: LoC, inst var ref count, selector size, argument count, and bytecodes.

Some of the top-level packages found in my image, showing clockwise, from noon high position: prerequisites, defined classes, extended classes.

Classes found in the ArithmeticValue class hierarchy, showing clockwise, from noon high position: methods, inst vars, refs to globals, global refs to the class (attributes suggested by Bob Hartwig, thanks!).

Top-level bundles in my image, showing clockwise, from noon high position: child packages, child bundles, prerequisites, comment size, defined classes, methods of defined classes, extended classes, extension methods.

If I were to go on playing, I'd basically start to reinvent Mondrian, which is not something I really wanted to do. I might play with the way the shape is generated some more, make it more of a star graph (whereas it's a sort of spider plot right now). And I'd definitely figure out how to do a legend plot.