MrCombo v1.0

December 13, 2011 at 02:15 PM | categories: iOS

I have submitted a new iOS application to Apple: MrCombo. This app lets you build lists in a two-level hierarchy: combo > group > item. Then you can shuffle the items to mix it up. Sample data includes ordering a pizza and picking an outfit, but you are meant to create your own combinations.

Yes, it's a bit silly. But I find it fun and useful so I thought I would release it. For now MrCombo is free, but if I keep adding more features that may change.

You can read more about MrCombo, or get it on iTunes.

If you run into problems with MrCombo, you can reach me through the App Store or post a comment right here.


AlbumMixer v1.10 and iTunes Match

November 28, 2011 at 01:00 PM | categories: iOS

I am hearing early reports that AlbumMixer v1.10 has problems with iTunes Match and playlists. So far I haven't been able to reproduce this, but I am working on it. If you see any problems and can provide details, please comment here.

Update: Based on additional user reports, plus a lack of crash reports from Apple, I think this is an iOS or iCloud issue. Because of this, and other current issues being discussed on Apple forums, I cannot recommend signing up for iTunes Match at this time. If you decide to risk it, start by backing up your entire iTunes library. Apple seems to be refunding subscription fees to some users, but that won't be much of a consolation if you lose a large media library.


Testing AWS EC2 Instances for Co-Residence

November 21, 2011 at 01:00 PM | categories: AWS

Have you ever thought about setting up a cluster of software nodes using Amazon's EC2 infrastructure? You may have wondered what happens if Amazon's underlying hardware should fail. That hardware runs the hypervisor for your EC2 instance, and if it fails so does your instance. If you have a cluster of N nodes, what happens to the cluster?

Most cluster software includes some mechanism to handle host failures. But that software was usually designed to run on physical hosts, not virtual machines. On physical hardware, a single hardware or software fault should take out exactly one host; in a virtualized environment, the same fault might take out more than one virtual machine. This is especially true in a cloud environment like EC2, where you have no visibility into the hypervisor.

At first this may not seem to be a problem. If you create four EC2 instances at the same time, they usually seem to start on different hypervisors. So a hypervisor failure will only affect one instance, and the cluster failover mechanisms will operate as usual.

But when an instance does fail, you will want to replace it. How can you ensure that the replacement instance won't be co-resident with any of your existing instances? If the replacement instance is co-resident with an existing cluster instance, and another hypervisor fault affects both instances, the cluster will go down. Depending on the circumstances, you may lose data too.

So how can we prevent this? One idea is to include a pool of spare instances when we create the cluster instances. But this makes the cluster more expensive. We cannot stop the spare instances either, because when we start them again they may end up co-resident with another instance. Over time we may also exhaust this pool of spares, leaving us with the original problem again. How can we avoid co-resident cluster instances?

If we could detect co-residence, we could simply create instances until we get one that is not co-resident. This approach is crude, but should be effective.

  1. Create instance (or start existing stopped instance).
  2. Check co-residence for each existing instance.
  3. If co-resident, stop the instance and return to step #1.

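In code, the loop might look something like the following minimal sketch in Python with boto3. Everything here is an assumption for illustration, not part of the original experiments: the AMI ID and instance type are placeholders, and is_coresident_with_cluster is a predicate you would supply yourself, for example by running the hop-count test described further down from the new instance against each existing node.

    # Sketch of the create/test/stop loop described above. AMI_ID and
    # INSTANCE_TYPE are placeholder values, and is_coresident_with_cluster
    # is a predicate supplied by the caller.
    import boto3

    AMI_ID = "ami-00000000"      # placeholder, not from the original post
    INSTANCE_TYPE = "m1.small"   # placeholder

    ec2 = boto3.resource("ec2")

    def allocate_instance(is_coresident_with_cluster, max_tries=10):
        for _ in range(max_tries):
            # Step 1: create a fresh instance (a stopped spare could be
            # started here instead).
            instance = ec2.create_instances(ImageId=AMI_ID,
                                            InstanceType=INSTANCE_TYPE,
                                            MinCount=1, MaxCount=1)[0]
            instance.wait_until_running()
            instance.reload()
            # Step 2: test co-residence against every existing cluster node.
            if is_coresident_with_cluster(instance):
                # Step 3: co-resident with some node, so stop it and retry.
                instance.stop()
                continue
            return instance  # not co-resident; safe to join the cluster
        raise RuntimeError("gave up: every new instance was co-resident")
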
Any instance that makes it past step #3 is not co-resident with any existing cluster instance, and can join the cluster. This leads to a new problem: how can we test for co-residence?

AWS does not appear to provide an easy way to do this. Allowing instances visibility into the hypervisor could lead to security holes. This problem has been discussed by Ristenpart, Tromer, et al. My own experiments suggest that AWS has changed its network topology since 2009, but I did find a technique that seems to work.

From what I can tell, each AWS hypervisor acts as a gateway for its guest instances. Each guest instance runs on its own subnet. So knowing the IP address alone tells us nothing, but knowing the network hops from one instance to another could be revealing.

If each hypervisor acts as a gateway, then packets sent between two co-resident instances will always need exactly two hops. Packets sent between instances that are not co-resident will need more hops.

Between ip-10-87-117-148 and ip-10-87-69-9 the hop count is exactly two, so we can conclude that they are co-resident.

Between ip-10-91-9-230 and ip-10-87-117-148 the hop count is more than two, so we can conclude that they are not co-resident.
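
To automate that check, here is one hedged sketch in Python. It is my own illustration, not code from the original tests: it assumes the script runs on an EC2 instance with a standard Linux traceroute installed, and that the two-hop rule above holds.

    # Sketch: count network hops to another instance by parsing traceroute
    # output. Assumes it runs on an EC2 instance with a standard Linux
    # traceroute installed, and that exactly two hops means co-resident.
    import re
    import subprocess

    def hop_count(target_ip, max_hops=10):
        out = subprocess.run(
            ["traceroute", "-n", "-m", str(max_hops), target_ip],
            capture_output=True, text=True).stdout
        # Hop lines begin with the hop number; any header text does not.
        return len([line for line in out.splitlines()
                    if re.match(r"\s*\d+\s", line)])

    def is_coresident(target_ip):
        # Two hops: this instance -> shared hypervisor gateway -> target.
        return hop_count(target_ip) == 2

A function like is_coresident could then back the co-residence predicate in the loop sketched earlier, run from the candidate instance against each existing cluster node.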

Note that this technique may not be foolproof. If Amazon believes that this information compromises security, they could probably reconfigure their hypervisors so that new instances would hop through a random number of extra gateways. This would add some latency to the network, but more importantly would keep the guest instance from using this technique to determine whether or not it is co-resident with some other instance. That would be a shame for those of us using AWS for highly available cluster applications... unless AWS also added a supported technique for avoiding co-residence.


Init blogofile

November 04, 2011 at 04:00 PM | categories: housekeeping

Yesterday I moved my WordPress comments to Disqus, and today blakeley.com moves to Blogofile. Please let me know if you run into any problems.


Before you upgrade to 5.0-1

November 03, 2011 at 08:47 AM | categories: MarkLogic

Thinking about upgrading to MarkLogic Server 5.0-1?

As usual, back up everything. I haven't seen any data loss myself, but it is your data so be careful.

If you have made any changes to Docs (port 8000) or App Services (port 8002), the app-services portion of the upgrade won't happen (the rest of the server will upgrade fine). If you want to use the new monitoring services, you need that part of the upgrade to go through.

The fix is to revert your changes to ports 8000 and 8002. If you have repurposed either port for cq, you may want to go into cq and export any *local* sessions before changing anything. Local sessions in cq are tied to local browser storage, which is tied to host and port, so you will lose access to them if you change the cq port. Not many folks seem to use cq's local sessions, but I thought I'd mention it. Whether you use cq on those ports or not, make sure port 8000 has root Docs/ and port 8002 has root Apps/ or Apps/appbuilder/ - you can see these checks in Admin/lib/upgrade.xqy, function check-prereqs-50.

If upgrade.xqy decides not to upgrade your App Services configuration, it will log a message "Skipping appservices upgrades, prerequisites not met." at level "error". The rest of the server will function correctly, but you won't get the appservices part of 5.0.
