
I’ve been playing with the beta of Live Mesh from Microsoft for some time now, and find it a very useful technology. So far the only problem I’ve run into has been some bug that was introduced when I upgraded to Snow Leopard.

For some reason, after restarting or hibernating my machine, Live Mesh gets left in an odd state where it is unable to connect to the mesh and the login action is greyed out:

Live Mesh greyed out login

After a bit of Googling and searching around on the Microsoft Connect site for people experiencing this bug, I found a couple of different solutions.

There are two possible workarounds; both require Live Mesh to be shut down first.

Method 1: delete the Live Mesh preferences file ~/Library/Preferences/com.microsoft.LiveMesh.plist.

Live Mesh preferences

This method is what I typically use, since it is the least intrusive. It reconnects all the folders that I’ve added to my mesh, and re-establishes the synchronization. It does tend to fill up my hard drive with files, since the initial synch puts most (if not all) of the files in the folders into the Trash.

Method 2: Start with a clean slate:

  1. Quit the Live Mesh client.
  2. Delete the Live Mesh settings in Application Support (~/Library/Application Support/Live Mesh).
  3. Delete the Live Mesh preferences file (~/Library/Preferences/com.microsoft.LiveMesh.plist).
  4. Launch Live Mesh client.
  5. Log in and select the folders you want to synch like you did originally.

This method is effectively like doing a complete uninstall, since it removes all the settings and preferences. It does cause a complete re-synch of the folders, and you can choose whether to “merge” or “replace” each one.

This will also end up with lots of files in the Trash, so watch out for your disk filling up.
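Both workarounds boil down to deleting a path or two under ~/Library while the client is shut down, so they are easy to script. Here’s a minimal Python sketch of the same steps (the function name and flag are mine; quit Live Mesh before running it):

```python
import shutil
from pathlib import Path

# Paths used by the Live Mesh beta client on the Mac (from the steps above).
PREFS = Path("Library/Preferences/com.microsoft.LiveMesh.plist")
APP_SUPPORT = Path("Library/Application Support/Live Mesh")

def reset_live_mesh(home: Path, clean_slate: bool = False) -> list:
    """Method 1: delete the preferences file. With clean_slate=True,
    also delete the Application Support folder (Method 2)."""
    removed = []
    prefs = home / PREFS
    if prefs.exists():
        prefs.unlink()
        removed.append(prefs)
    if clean_slate:
        support = home / APP_SUPPORT
        if support.exists():
            shutil.rmtree(support)
            removed.append(support)
    return removed
```

Call `reset_live_mesh(Path.home())` for Method 1, or `reset_live_mesh(Path.home(), clean_slate=True)` for the clean slate; then relaunch the client and log back in.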

Method 3: Never shut down or let your machine sleep ;-)

Obviously, this method isn’t practical, but I figured I’d mention it. Until Microsoft adds some code to the Mac client, it is probably worth trying to remember to shut down the Live Mesh client before you reboot or leave your machine in a state where it loses its connection with the network.

My guess is that the Microsoft developers aren’t listening for the right events, and are therefore leaving things in a state they don’t know how to recover from. Most Mac apps are pretty smart about knowing when the machine is going to shut down, or when the network connection goes away, and handle the problem as gracefully as possible.

Live Mesh is still in beta, so it’s likely they will fix this before it becomes a real product. Like most Microsoft beta products, Live Mesh is still incredibly useful and solid on Windows. I’m hopeful it will get there on Snow Leopard as well.

I love Google Voice. It’s an inspired system that gives me a permanent number that I use as the way to get in touch with me.

It lets me have calls ring at multiple numbers, deal with voice mail as part of my normal email, and gives me some nice attempt at transcription that is sometimes useful.

Usually I can figure out what the caller was saying from the weird transcription note that I get, but occasionally I get one like today’s gem. The caller said “Call me back and I’ll fill you in”, and Google Voice gave me: “15 minutes and I’ll kill you” ….

Of course both of those would get me to call back, but I think they need a little more work to get this right.

…Or, how to reduce email without leaving the group…

I work with a job search group called Job Connections (http://www.jobconnections.org) which connects its members with a Yahoo group. It’s a moderated group whose membership is generally restricted to people who have actually attended a Job Connections meeting.

It’s a pretty busy group, so there are a lot of emails that get sent out (mostly about job postings that somebody received and is not interested in pursuing). As a result, the most frequently asked question to the group is: “How do I reduce the amount of email I receive from the group without leaving the group?”

Fortunately, Yahoo Groups has preference settings that you can use to control the level of email you get sent.

The basic options are:

  1. Individual Email – Receive every message posted to the group.
  2. Daily Digest – Receive a summary of up to 25 messages in a daily email.
  3. Special Notices – Receive only messages from the moderator.
  4. Web Only – Receive no email ever; you have to log into the group to see the messages.

The easiest way to edit the preferences is to go to http://groups.yahoo.com/mygroups?edit=1 and change the settings. If you are not yet logged into your Yahoo account, you’ll get the login screen:

Yahoo login

After logging in, you’ll see the group edit page that will look something like this:

Yahoo groups edit

Just click the “Message Delivery” drop down to change to one of the options described earlier.

Alternatively, you can log into the group and click on the “Edit membership” link at the upper left corner of the page and change things there.

Yahoo group edit membership

Clicking that link takes you to the membership preferences page for that group where you can change a number of things, including the email preferences:

Yahoo group membership prefs

Finally, if you want to control your email options via email, Yahoo also provides a way to do that. Basically, there are email aliases that let you set a number of your preferences simply by sending an email to them; they are more fully described in Yahoo help at http://help.yahoo.com/l/us/yahoo/groups/original/members/email/email-01.html.

Please note: the email aliases are specific to each group. To do the same for a different group, replace “cpc_job_connections” in the alias addresses with the actual name of the group (e.g., pastry_chefs-subscribe@yahoogroups.com).
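The aliases follow a simple [group]-[command]@yahoogroups.com pattern, so they are easy to build for any group. Here’s a small sketch; the command names are from memory of that Yahoo help page, so double-check them there before relying on one:

```python
# The standard Yahoo Groups email-command aliases. The command names here
# are from memory of the Yahoo help page linked above -- verify them there.
COMMANDS = {
    "subscribe": "join the group",
    "unsubscribe": "leave the group",
    "normal": "switch to individual emails",
    "digest": "switch to the daily digest",
    "nomail": "switch to web only (no email)",
}

def group_aliases(group: str) -> dict:
    """Build the email-command addresses for a given Yahoo group."""
    return {cmd: f"{group}-{cmd}@yahoogroups.com" for cmd in COMMANDS}
```

For example, `group_aliases("cpc_job_connections")["digest"]` gives cpc_job_connections-digest@yahoogroups.com; sending an empty email there switches you to the daily digest.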

I recently switched from DSL (which I’d had since it was first invented) to Comcast Cable for my Internet connection (and TV and phone). By doing so I saved about a hundred bucks a month over AT&T and DirecTV. Of course, as soon as I switched, AT&T started calling me with a bundle that was roughly the same price, but that’s a different story.

One of the things that happened a while back was that Plaxo was bought by Comcast. I have always been a premium Plaxo user, feeling that I wanted to support them since I find the product so incredibly useful. What I learned was that if you are a Comcast subscriber, you are automatically a Plaxo premium user.

Now, being a premium subscriber used to mean only that you got VIP support and access to a couple of tools (like the address and calendar deduplication tool). But now Plaxo has announced that the Outlook synch is a premium-member-only tool. While I worry that this decreases the value of the service (since there will be fewer reasons for people to sign up, therefore fewer members, which decreases the number of automatic updates I get), what is interesting is that every Comcast subscriber gets access to these premium services.

To activate this, you first have to make sure that you are signed up for Plaxo through your Comcast email account. Log in to your Comcast email by going to http://www.comcast.net and clicking on the Email link in the “My Comcast” portlet:

My Comcast

If you’re logged in already it will go straight to your email, otherwise you’ll get the login screen, where you need to log in:

comcast login

Log in with your Comcast email address. This will be something like your last name and street address unless you’ve changed it. Once you have logged in, you’ll be at the Comcast email screen, which uses the Zimbra email client. From the tabs, you’ll want to choose the address book:

Comcast email tabs

The first time you go to the address book, you’ll be asked to build your address book:

comcast build address book

If you click on the “build your address book”, you’ll go to an initial Plaxo setup screen. Since they already have some of your information (name and email), they don’t have to ask you for anything but where you want to populate your address book from:

plaxo uab

So now it gets interesting. If you click on Plaxo, you can link an existing Plaxo account to your Comcast email. If you were already a Plaxo user, this will get your current address book and calendar.

Plaxo Link account

If you’re not already a Plaxo subscriber, you can choose one of the other options to build your address book; Plaxo will log you in to that service and pull the address book from there:

Plaxo UAB Gmail

Note that the GMail synch only works for accounts ending in “gmail.com”, and not GMail accounts that are using Google Apps. I suspect that Yahoo accounts would also be restricted to “yahoo.com”, but I don’t know that for sure.

There’s a shortcut to sign up for Plaxo immediately: go to http://www.plaxo.com/ftue/activateComcast, and clicking the Activate button will get you set up:

Plaxo activate

This one does require you to fill in your name and basic information (or link to your existing Plaxo account by following the link at the bottom right). Either way, once you have the account linked, you are signed up and active as a premium member. Now not only can you set up synch points, but you can also install the Outlook synch tool on any computer you use.

Along the way the steps will ask you to update your address book, and if you want to invite your friends. I always skip that step, since I send my friends enough email already.

At the end, you can validate that you’re a premium member by clicking on “Settings” at the top right of the screen, and then choosing “Premium” from the list at the left:

Plaxo Premium

This shows my account has premium status.

If you use Outlook, there are a few more steps to get fully set up with the Outlook synch tool. There are multiple ways to get there, but ultimately you want to download the sync tool from http://www.plaxo.com/people/tools?src=tools

plaxo premium tools

Note that you have access to all of these tools, some of which are very cool (like being able to roll back your address book). If you aren’t a Plaxo premium subscriber, you can download and install the tool, but you won’t be able to use it, since the synch verifies the account status when you run it for the first time.

Plaxo has a nice walkthrough of the install process here: http://www.plaxo.com/downloads/outlook?src=pulse_tools_outlook&lang=en, so I won’t duplicate that. One thing that I did learn the last time I did this for somebody is that you have to install it with an account that has admin privileges. The install won’t fail, but you just won’t get the Plaxo tool bar in Outlook.

Once the install completes, and you start Outlook up, it will walk you through a wizard that will sync your Outlook and Plaxo address books. From then on, you should see the Plaxo tool bar at the top of your Outlook screen:

Plaxo bar in outlook

There are lots of other neat things about Plaxo, not the least of which is that you can synch between multiple machines. There’s a version of Plaxo for the Mac, and it seems to do a fair job of interacting with the built-in Mac synch tools (including MobileMe).

I’d definitely recommend you take advantage of this “free” service if you are a Comcast internet subscriber.

I recently attended the Google Technology User Group (GTUG) Campout at the Googleplex in Mountain View. This was a three-day sprint to build something interesting with the latest Google product: Google Wave.

Google Wave, as it turns out, is a very interesting experiment in social interaction. Google is trying to reinvent collaborative communication with a piece of software that is one part chat, one part Wiki, and one part WebEx.

I’d seen this product at the Google I/O conference a few months back and was impressed with the demos. Basically you get these shared documents (called Waves) that all of the collaborators can update at the same time. You can watch the hour and a half demo at http://www.youtube.com/watch?v=v_UyVmITiYQ

The demo included things like interaction with blogs, Twitter and other web technologies, as well as interesting programming doing things like on the fly grammar checking. I signed up for a sandbox account the day of the presentation (using my iPhone of course), and got set up a week or so after that.

Wave was written by the brothers Lars and Jens Rasmussen, the architects of the Google Maps API. In some sense, this is an experiment in building software informed by the lessons they learned with the immensely popular Maps API. By giving developers access early in the build process, they hope to build a more solid platform that will serve those developers’ needs.

So Friday came and I drove over to Google with Bennett Fonacier (a friend I met through Job Connections some time back). After the 50+ people got through with their 5 minute pitches, we networked for another hour forming teams. There were many ideas that were very similar, and for the most part these groups joined up into a combined team. Bennett and Steffen Frost (CEO of Carticipate) both came up with the idea of matching people for ride shares using the Wave.

I’d originally thought I would join a team doing something health related, but since my goal was to get a working piece of code, and I was sitting with the car pool team, I joined that effort. We became one of the roughly 50 project teams, and quickly talked through what we’d be building over the next 48 hours or so.

The other members of the Wave Rides team were:

  1. Steffen Frost was a great concept guy, and had an existing product we were going to try to emulate.
  2. Bennett Fonacier has some development background, but he was short a computer, and would be doing QA.
  3. Andreas Koll, who had some experience with the Google Maps API, volunteered to build the Gadget for our interface.
  4. Hannie Fan offered to provide some design expertise and CSS coding.
  5. Robert Herriott was a quiet supporter, offering constructive criticism.

I took on the task of writing the Robot, which is the part of the Wave that would take the input from the Gadget and match the participants with ride partners. Andreas had a working Gadget in short order, and was able to embed it in a Wave.

While he was doing that, I was working on getting a Robot built using the guidelines in the slides on developing Wave extensions. I got a working “Hello World”, built the extension.xml file, and with help from the Google crew, got it so we could create a new Wave with my Robot added.
Carticipate Logo

I got the icon from the Carticipate site, added a bit of code, and the Robot was adding the Gadget to the Wave. So far I had gotten a working Robot, and Andreas had a working Gadget. Now all we needed to do was clean them up a bit and get them talking.

This turned out to be a bit tougher than expected. The current state of the world is that the Robot can add a Gadget, and send data to it when it is added to the Wave, but can only read the state from the Gadget, and not actually set anything after the Gadget is running.

Anyway, I eventually got some debug code going in my Robot that would dump out the properties of the Gadget, which helped Andreas to debug some issues he was seeing with the state of the users accessing the Gadget.

A Gadget is basically a snippet of HTML and JavaScript that gets embedded in an XML file for inclusion in the Wave. Because the working code is inside an XML document, it gets wrapped in a CDATA element, which makes editing and debugging the Gadget a bit challenging. Andreas’s approach was to cut the HTML code out of the Gadget, edit it as an HTML document, then paste it back into the Gadget. Not ideal, but it works.

Our original plan for the WaveRides robot was that it would behave roughly the same way as the Carticipate application does: ask the user a few questions about where they are going and whether they are driving, and then show a list of everybody who is travelling in the same area at the same time. So as we worked, I kept prototyping closer and closer to that goal.

By late Saturday night, we had a working prototype that launched the map gadget, and displayed back the data from the users interacting with the gadget. The gadget was displaying the location of all of the users on the map, and we were feeling pretty good about the progress (especially considering none of us had ever built anything with the Wave API before). Bennett and I headed home, expecting to finish up the next morning, leaving Andreas coding away on his gadget.

The next day we arrived at the Googleplex and found that Andreas had solved some of his remaining problems, and the gadget was looking good. I went to work on the Robot, trying to get it to match up the user data. Of course, since there was little time left, the Wave kept misbehaving (probably due to all of us pounding on the sandbox with untested code), and we kept running into walls.

My original design had been to add a blip with the map gadget and gather my data from there. I soon realized that it was difficult to keep track of the gadget that way, so I changed my code to add the gadget to the root blip, and started removing debug code. At some point, we decided to put the code up on code.google.com for safekeeping, so I spent a few minutes figuring out how to do that (you can see the code at http://waverider.googlecode.com).

It was still fairly early on Sunday morning, and Andreas had been up until the wee hours of the night, so he wasn’t around for us to ask him to make changes to his gadget. We had separated the development of the gadget and the robot, so they were actually being served by two separate app server applications. The gadget only provided input for one point, and to complete the robot to the point we could demo something interesting, we needed it to have a “from” and “to” for each participant.

So rather than reinventing what Andreas had done, I decided to change the robot to create a “from” and a “to” gadget in the Wave, and use that. Interestingly this turned out to be fairly painless. I was able to add the second instance of the gadget, and give them each a name. The Wave kept track of them separately, so I got the data from both separately.

I spent the last few moments before we were supposed to present trying to get a simple match working. The nice thing about this was that I could version the app on the Google App Engine, and keep a known working version deployed while continuing to test. As other teams presented, it became obvious that this had been a good decision, and I eventually dropped back to one of the earlier working versions for the demo.
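The matching logic itself is independent of the Wave plumbing, so it can be sketched in plain Python. This is a toy version of what I was aiming for (the data shapes here are invented for illustration; the real gadget state was just string properties pulled from the two gadgets):

```python
import math

def distance(a, b):
    """Rough planar distance between two (lat, lon) points -- fine for a demo."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def match_rides(participants, max_dist=0.1):
    """Pair each rider with the first driver whose "from" and "to" points
    are both within max_dist degrees. Each participant is a dict like
    {"name": ..., "driver": bool, "from": (lat, lon), "to": (lat, lon)}."""
    drivers = [p for p in participants if p["driver"]]
    matches = []
    for rider in (p for p in participants if not p["driver"]):
        for d in drivers:
            if (distance(rider["from"], d["from"]) <= max_dist and
                    distance(rider["to"], d["to"]) <= max_dist):
                matches.append((rider["name"], d["name"]))
                break
    return matches
```

A real version would use proper great-circle distances and time windows, but even this shape was enough to demo the concept.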

We got to demo the concept, and explain what we would have liked to have done. I accomplished my goal of learning how to code a basic robot, and learned a lot about the API. We were by no means the slickest or coolest app there, but we had fun building it.

We’ve got the start of an open source project that could eventually be used to match people by location for all sorts of purposes, and we got to see some of the challenges in building apps for a piece of software as new as Google Wave.

Team picture

From left to right above: Steffen Frost, Hannie Fan, Rob Weaver, Andreas Koll, Bennett Fonacier.

Steffen created a really cool video over that weekend as well that you can watch at http://www.youtube.com/watch?v=DkmuBmBZkBo

Since last week, I’ve been immersed in coding and development education about various cloud applications.
Google Wave

First there were a couple of meetups about the Google Wave product that gave me an overview of some of the capabilities and requirements for developing applications around it. Google Wave is an interesting piece of social media that is a bit like chat and MediaWiki combined with WebEx.

The first talk, on Monday, was about the federation server, which is the open source implementation of Google Wave. The idea is that you could have a Wave server inside your firewall that could protect your data, while also allowing for communication and interaction with other federated servers. The code is so new that it is actually using a different protocol than the Google Wave servers are.

This is a very early prototype, but the idea is that it will use standard XMPP servers to communicate between domains, and use typical certificate based trust mechanisms to authenticate between domains. The internal server could be implemented with rules to (for example) prevent patient data from being sent outside of the firewall in a conversation between a medical team and a provider at another institution.

The next talk, on Wednesday, was about writing extensions for Google Wave. These extensions are UI widgets (called Gadgets) and Robots, which add capability to Google Wave.

A Gadget is basically an HTML and JavaScript snippet that does something useful when added to a Wave. A Robot is a bit of code that interacts with the Wave as if it were one of the collaborators in the wave. The Robot can add participants, Gadgets and edit the contents of the Wave.

As an example, you could have a voting Gadget that allows the collaborators to vote. A Robot could add the Gadget to the Wave, tally the results, and write them out to a database.

A Robot can also do interesting things like watch the wave for keywords and make changes or respond. Some of the examples are a grammar checker that corrects grammar as you type, a code formatting and highlighting robot, and the classic Eliza conversational robot.
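To make that voting example concrete, the tallying part of such a Robot could be a few lines of Python; the state format here (participant id mapped to chosen option) is invented for illustration:

```python
from collections import Counter

def tally_votes(gadget_state):
    """Count votes from a gadget's shared state, where each key is a
    participant id and each value is that participant's chosen option.
    Returns (option, count) pairs, most popular first."""
    return Counter(gadget_state.values()).most_common()
```

A Robot would read the gadget state from the Wave, run something like this, and then write the results into a blip or out to a database.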

Next was the weekend long GTUG Campout at the Googleplex. This was a heads down coding adventure where the idea was to get a workable Google application up and running in 48 hours. I signed up for the campout a while back, with the intent of learning how to work with Google Wave.

I had signed up for a Wave sandbox account when it was first announced at the Google I/O conference, so I was able to play with it a bit, but hadn’t really had time to get started with developing anything. After the talk on Wednesday, I had a pretty good overview of how to get set up, so at least I had all the bits installed to participate.

So Friday came and I returned to Google once again. The idea was that we form teams to develop applications using the Wave extensions, so the first task was to come up with ideas and pitch them. After the 50+ people got through with their 5 minute pitches, we networked for another hour forming teams. There were many ideas that were very similar, and for the most part these groups joined up into a combined team.

After the teams were formed, the Google team gave another talk about developing Wave extensions, which was a great review and contained some things that aren’t really documented elsewhere (since the API is still changing). The slides from that talk became my guide to building my first robot, an experience that I’ll talk about in another post about the GTUG Campout 2009.

I’ve been using some of the more interesting “cloud” applications recently: Google Apps, Live Mesh and a few others.

I’m really impressed with the capabilities and use of these free web applications. It’s a really interesting marketing tool as well: give away the low-end product to build user acceptance, and then add a bit more to give value to the enterprise.

My first foray into the personal cloud was Google Docs. This product has to be the coolest idea ever: create your documents on a web site, and let them be shared and simultaneously editable. The concept is awesome, and works really well for some documents (most notably spreadsheets). I can share a spreadsheet with any number of people, and they can all edit it at the same time.

It’s sort of like NetMeeting on steroids: I open my spreadsheet and there’s a little notification that somebody else is editing or viewing it. As they make changes, I see them in real time, and they see any changes I am making. The interface is not quite as friendly as Excel, but for spreadsheet-light users like myself, it’s more than adequate.

This is supposed to work for documents as well, but I’ve had less success with them (changes seem to get overwritten if more than one person updates at a time).

The other beauty of this is that it effectively gives you network storage for all of your documents, solving the problem of how to keep them safe and secure. I no longer have to worry (as much) about backing up my hard drive, since I know Google is taking care of the hardware. If a drive crashes there, they are ready with a failover, and I never even know that it was lost.

After using Docs for a while, I also started playing with the other apps and found them all well thought out and useful. One of the main reasons that I had a Windows VM on my Mac was to support Outlook, because of its tight Exchange integration and ability to handle my calendar well. I combined Outlook with Plaxo to keep my various calendars and contacts in synch, and was very happy with this.

The bad thing about Outlook, however, is the way it stores its data: the dreaded PST file. They’re notoriously temperamental, extremely space-wasteful, and difficult to back up. So I started trying other methods for dealing with email, including the built-in mail client for the Mac, and Entourage. None of these were as easy or as complete as Outlook.

Then I tried GMail’s client. I’d had an account for years, but had never really tried the mail client. But as I thought things through, the benefits were clear: I get a huge amount of storage for my email, and I don’t have to worry about losing any history ever. I’ve lost years of email in a single PST or drive crash before.

At first I wasn’t convinced. The UI seemed cluttered, and I wasn’t a big fan of the way the conversations were threaded (in Outlook I used to categorize, and had lots of options for sorting folders just so). With GMail, everything is in a big pile, and you filter by tags. After a few weeks, another benefit became obvious: the fact that I could search for anything in my mail.

In Outlook, there was always a find feature that, if you could get it to work, took a very long time. Worse, it wasn’t possible to search across different mail accounts unless you added some search add-on. I had been using Google Desktop for this for some time, which worked well as long as the index had seen the message I was looking for (it only indexes messages as they are opened, so when they get archived the search may find them, but you can’t get to them because it’s pointing to the wrong place).

With GMail, everything is indexed, no matter where it is. And interestingly, this also includes your instant messages, so if I remember I talked to Warren about something, I can search for it and GMail will find it in both my email and chat conversations with him. And when I look at a message, it shows me the whole thread of the conversation, with the bits that match the search expanded, making it easy to put the whole thing in context.

So now I’ve got free document storage, and free email with more storage than I’ve ever used (a PST with 10 years of email had to be split because it was over a gigabyte in size, yet contained less than a hundred megabytes of data). I don’t have to manage my email beyond tagging it in ways that are useful to me. I can tag a message for multiple things and there is still only one copy of it to worry about, unlike with folders, where you had to keep two copies to categorize things that way.

So how does Google monetize this? Well, it turns out they have an enterprise version that they sell for $50 per user per year. Compare that with the cost of hosting Exchange and a file server, and you have a no-brainer for most small enterprises. And even for the standard version, they let you use it for free for up to 50 users, so an SMB can get started for even less than the $50 per user.

Considering that the equivalent Microsoft functionality would require the full Office suite, an Exchange server, and some collaboration server, you’d be looking at an outlay of a few hundred dollars per user. The clear win here is that you’ve now got a suite that works for the home user, and can also be used effectively by business users. Google wins on the marketing front, leveraging the lessons of open source to gain customer base and entry into the enterprise market.

Next: Live Mesh …

I was thinking about this as I drove to work this morning: what is the real business value to Oracle of buying Sun?

It occurred to me that among the many benefits to Oracle are the products that help them compete better with the Microsoft offerings. Could this be another in a long line of acquisitions by Larry Ellison in his quest to make Oracle a more successful company than Microsoft?

Microsoft has owned this market for some time now, and has had some tools that Oracle has tried to compete with over the years. Microsoft had Access, which at a surface level is a database, but has over the years served much better as a front-end tool for database access. Oracle has tried to address this over the years, first with Oracle Forms, then with JSF and ADF, and now APEX (formerly known as HtmlDb).

These tools, while extremely capable, have never had the low barrier to entry of the Microsoft product line, and now with the rapid introduction of Silverlight, Microsoft is threatening to dominate the RIA market.

There is tremendous buzz (hype?) in the market about the RIA competition, with both Adobe and Microsoft claiming a market penetration of over 70%. Sun has similar figures with Java, and has recently entered this market full force with JavaFX.

JavaFX combined with MySQL looked to have the potential to introduce new products that would displace both the rich-media and rich-data-driven applications that have been dominated by Flash (Flex).

With the acquisition of Sun by Oracle, it is entirely possible that a solid Flex and Silverlight competitor could emerge due to the capabilities of the Java platform for producing UI, combined with the simplification in coding provided by JavaFX. This could also give rise to an easy to use tool that could replace Access as the easiest way to build an application, by integrating the JavaFX UI capabilities with the Oracle developer tools.

The only missing piece in this puzzle for me is a focus on the end user as someone capable. Oracle has great tools for developers, and they help build applications extremely easily, but Oracle hasn’t done a great job of figuring out how to bridge the gap between the technical types and the consumers. I don’t think it’s a vast chasm to cross, but they would need to focus on improving ease of use to compete head to head with Microsoft and Adobe.

Not only does the Sun acquisition continue to strengthen the web tools that Oracle recently improved with WebLogic, expand their hold in the database market, and solidify their place in the SOA market, but it also allows them to compete better in the hottest area of competition at the moment: Rich Internet Applications.

What will Oracle do with these capabilities? Only time will tell.