Promises are a pattern for asynchronously executing code and following up on the results of that execution. In this post, we’ll explore promises at an introductory level, based on the Promises/A+ specification, a widely adopted standard for promises.
A note on implementation: The promises pattern can be applied in many programming languages, but requires an implementation to support it. There’s nothing magical about the implementation, and promises libraries are available for a variety of languages. This post focuses more on the promises concept than on implementation, but you can learn more about implementation through open source projects such as those listed on the Promises/A+ website.
A promise is exactly what it sounds like: a promise to do something. You aren’t necessarily promising to do it right now; you’re just promising to do it at some point. In this way, it’s a lot like a future object in Java and other languages. It’s generally meant to encapsulate asynchronous work that needs to be done. Read on…
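Since the pattern is language-agnostic, it can be sketched with Java’s closest standard-library analogue, `CompletableFuture` (this is an analogy for illustration, not a Promises/A+ implementation — the class and methods below come from `java.util.concurrent`):

```java
import java.util.concurrent.CompletableFuture;

public class PromiseSketch {
    // The CompletableFuture is the "promise": the work runs asynchronously,
    // and thenApply registers a follow-up on the eventual result, much like
    // .then() in Promises/A+.
    static CompletableFuture<String> answer() {
        return CompletableFuture
                .supplyAsync(() -> 6 * 7)          // promised work, run on a background thread
                .thenApply(n -> "answer: " + n);   // follow-up once the result exists
    }

    public static void main(String[] args) {
        // join() blocks until the promised result is available.
        System.out.println(answer().join()); // prints "answer: 42"
    }
}
```

The key idea carries over directly: the caller gets a handle on a result that doesn’t exist yet, and attaches follow-up work instead of waiting for it.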
Before coming to CommerceHub, I spent nine years—my entire career, starting as an intern—as a firmware engineer in a hardware-centric world. Since joining the Hub, I’ve learned an incredible amount about the world of software engineering. As I come up on my one-year anniversary at CommerceHub, it seems like a good time to reflect on and share some of what I’ve learned during my time here.
Recently at CommerceHub, we have been putting more focus on getting reliable performance metrics for our applications. We’ve found that the best way to do this without using production data is to run performance tests.
My team specifically wanted to test a REST API with a few high-traffic endpoints. The tool we decided to use was Gatling, a load-testing tool written in Scala that uses Akka to simulate a large number of users firing requests at a target URL. We chose Gatling because it’s easy to set up and produces very detailed reports.
Now that we had chosen our tool, we needed to figure out how to run our tests. We wanted to execute the tests and be able to fail a build based on bad results. Our team uses Gradle as a build system, so naturally our first instinct was to Google “gatling gradle plugin”. Unfortunately, we didn’t find anything that quite met our needs.
Seeing a need for executing Gatling tests through Gradle, we created our Gatling Gradle Plugin. This plugin allows us to publish our test results to Graphite, run multiple tests in a row, and fail the build if our application has slowed down too much or is returning unexpected responses. We’ve employed the plugin in one of our Continuous Delivery pipelines to ensure that we aren’t releasing changes that greatly slow down our application. This plugin is now available in the CommerceHub OSS git repo.
You’re probably a visual thinker. Almost everybody’s a visual thinker.
And software is, unfortunately, terrible at making itself visible. Your options are generally to see it in motion (as a running, preferably working product) or see it as code, with very little in between. (Maybe some log messages if you’re unlucky.)
This is why CodeCity is an interesting concept. It’s a visualization environment that allows a user to graph a codebase in three (or more) dimensions. The tool dates from 2009; I first discovered it through Adam Tornhill’s fun read “Your Code as a Crime Scene” (CaaCS).
You’ll need a tool to analyze your code and export it in the MSE format that CodeCity reads. There are two tools that I’m aware of: inFusion (whose demo doesn’t produce the correct output, and which is no longer available for purchase anyway) and iPlasma 6, which I’ve described below.
Exporting a Java codebase to MOOSE / FAMIX 2.1:
Download iPlasma. The link is on the site above, but it’s easy to miss. Here’s a deep link.
Update the iPlasma front end. iPlasma contains a pure Java Swing app called Insider, which includes a pair of launch scripts: insider.bat and insider.sh. They offer a front end onto the tool we’ll be using, but I found they need some doctoring first:
The batch file attempts to launch with a bundled version of the JRE that isn’t modern. I altered it to use Java 1.8 from my path.
Both scripts attempt to launch with a gig and a half of heap. I found that to be insufficient, so I bumped it to 4 GB.
The tool logs to console, so I found it valuable to launch from a shell.
On launch, you’re presented with a good old-fashioned Swing UI. Select the option Load->Java Sources.
The Swing modal dialog is displayed. Click the button with the three dots and browse to the root of your code repo. It doesn’t really matter how your directories are laid out or what build system you use — iPlasma is looking for your source files.
Click the Open button, then OK.
Wait a while. Expect errors as a tool built for Java 1.5 tries to parse your modern code; these shouldn’t matter too much. Be aware: iPlasma caches your classes as it reads them to save time on the next read. You’ll need to clear this cache (a directory named “temp” in the iPlasma launch location) before future runs whenever you point the tool at a different repo or branch, or otherwise change the code.
After what seems like an eternity, you’ll have this rather unusual UI:
The code you want to export will be in a tab in the upper right pane. Left-click the name of the folder, then right-click the same name.
A context menu is displayed. Browse to Run Tool -> Moose MSE Exporter.
You’ll be asked to give a filename. Enter one and click OK.
Expect to wait a SECOND eternity as your model is exported.
The model of a 1M LOC codebase is about 125 MB (which is why I needed more heap to build it).
I found that, for whatever reason, some classes caused the export program to fail. However, the offending class’s name was logged right before the failure. Fixing things involved closing iPlasma, removing that class’s source file, clearing the cache, and starting over at Step 4 above.
Now all you need to do is load this MSE into CodeCity. That’s pretty straightforward, though for a codebase our size it can take forever (about 20 minutes). Once it’s loaded, you can generate a “city” based on attributes of your code. You can reconfigure these attributes to tell a better story about its complexity — such as basing the height of the “buildings” in your city on invocation or access counts, rather than lines of code, to illustrate importance rather than size.
There’s no Groovy, Java or Clojure support. This means our model doesn’t include a number of interesting classes, including most of our tests.
When viewing a very large city, CodeCity has a tendency to crash at any provocation — which means restarting the 20-minute import procedure. I keep a smaller version of the full codebase that I use for testing modeling options before applying them to the larger system.
Globally unique identifiers (GUIDs) are a subset of UUIDs; Microsoft uses them for identifying classes, interfaces, and other objects. In particular, I was interested in the objectGUID attribute for users in Active Directory, though the same technique should apply to all Microsoft GUIDs. Microsoft GUIDs are compatible with the UUID specification with one exception: the binary encoding. RFC 4122 specifies that all segments must be encoded as big-endian, while Microsoft encodes the first three segments using the system’s native endianness (which for Windows is commonly little-endian).
If you’re getting access to the GUID as binary data (such as a byte array), you’ll need to do some processing before passing the data to the UUID class. Balazs Zagyvai’s adsync4j project provides a useful example of this (see gist below). It also uses UnboundID LDAP SDK for Java, which I highly recommend if you’re accessing LDAP servers (including Active Directory) in Java.
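Assuming you have the raw 16-byte objectGUID value in hand, the byte reordering can be sketched as follows (the helper name `fromMicrosoftGuidBytes` is my own, not taken from adsync4j):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public class GuidConverter {
    // Convert a 16-byte Microsoft GUID (mixed-endian) into a java.util.UUID,
    // which expects the RFC 4122 big-endian byte layout.
    public static UUID fromMicrosoftGuidBytes(byte[] guid) {
        if (guid.length != 16) {
            throw new IllegalArgumentException("GUID must be 16 bytes");
        }
        // The first three fields (4 + 2 + 2 bytes) are little-endian; swap them.
        byte[] be = new byte[] {
            guid[3], guid[2], guid[1], guid[0],    // first segment (4 bytes)
            guid[5], guid[4],                      // second segment (2 bytes)
            guid[7], guid[6],                      // third segment (2 bytes)
            guid[8], guid[9], guid[10], guid[11],  // remaining 8 bytes are
            guid[12], guid[13], guid[14], guid[15] // already big-endian
        };
        ByteBuffer buf = ByteBuffer.wrap(be); // big-endian by default
        return new UUID(buf.getLong(), buf.getLong());
    }
}
```

Note that only the first three segments are swapped; the final eight bytes are already in network byte order, so they pass through untouched.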
Over the past two years, CommerceHub has been diving head-first into config management with Chef. We’ve come to realize that establishing a consistent workflow and common patterns when working with Chef across product teams can save us some pain.
When adopting a service-oriented architecture, one of the things we needed to do was define the data format each service endpoint takes as input and produces as output. In some cases, the input is simple and easily represented as URL segments or query parameters. In most cases, however, either the request body or response body needs to deal with a richer data object. For requests, this data needs to be parsed, validated, and coerced into a format that the application logic can use. For responses, we need to be able to generate the desired data format from the objects generated by the application logic.
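As a hypothetical illustration of the request side (the `OrderRequest` type and its fields are invented for this sketch, not taken from our services), the parse/validate/coerce step might look like:

```java
import java.util.Map;

public class OrderRequest {
    final String sku;
    final int quantity;

    private OrderRequest(String sku, int quantity) {
        this.sku = sku;
        this.quantity = quantity;
    }

    // Parse, validate, and coerce raw request fields (e.g. from a decoded
    // request body) into a typed object the application logic can use.
    static OrderRequest from(Map<String, String> raw) {
        String sku = raw.get("sku");
        if (sku == null || sku.isEmpty()) {
            throw new IllegalArgumentException("sku is required");
        }
        int quantity;
        try {
            quantity = Integer.parseInt(raw.getOrDefault("quantity", ""));
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("quantity must be an integer");
        }
        if (quantity < 1) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        return new OrderRequest(sku, quantity);
    }

    public static void main(String[] args) {
        OrderRequest req = OrderRequest.from(Map.of("sku", "ABC-123", "quantity", "3"));
        System.out.println(req.sku + " x" + req.quantity); // prints "ABC-123 x3"
    }
}
```

The point is that validation failures are surfaced at the boundary, so the application logic only ever sees a well-formed, typed object.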
When managing servers with Chef, sometimes it’s useful to trigger a run “right now.” One of our use cases was triggering a Chef deployment of an updated application as part of a continuous integration job, prior to running acceptance tests. One of the most common ways to trigger a run is `sudo chef-client`. You may also have stumbled upon the option of sending a USR1 signal to the chef-client daemon process (`sudo killall -USR1 chef-client`). Depending on your configuration, though, the USR1 approach may not trigger the run immediately, as it uses the configured “splay” to wait a random number of seconds before running. Both of these approaches also require root privileges, which may be problematic in automation scenarios.