One of the most common mistakes I’ve seen businesses make over the years is to lose focus on what made them successful in the first place.

Over the last year or so I’ve become increasingly disappointed with Plaxo. They seem to have forgotten that their key differentiation in the market was the way they helped you keep your address book (and calendar) up to date and, secondarily, kept multiple services in synch.

To me they seem to be chasing social networking at the expense of the things they were already really good at. Perhaps part of this is because they got bought by Comcast, but losing focus is never a good thing. They gave the site a facelift a while back and added a whole social networking layer with the Pulse bit, which seems to be modeled after other social networking sites.

The thing that drew me to Plaxo (almost ten years ago now) was that it solved a huge problem for me: keeping my address book up to date. Before Plaxo, I’d spam my address book about once a year to see if I got any bounces, and then go through the bounces one by one to update them. This ended up being a lot of work, and it gave me no guarantee that I had up-to-date information for anybody. There’s also the problem that when people change their email address, mail doesn’t always bounce, so I could be sending email to a dead account. Lots of companies leave old email addresses open and/or don’t send bounce messages for invalid addresses, so no reply doesn’t always mean what you think it might. And even if you ask for a response, not everybody will send one anyway.

The other problem before Plaxo (B.P.) was that my address book was never very reliable. Sometimes I would get an email and save the sender to my address book, but if I didn’t have a business card or some other way to gather information about them, that email address would be the only information on record. So six months later, when they moved to a new company, I had no working email address and no way to find them.

So while I was still working at Quovera, Praveen Shah pointed out Plaxo to us as a cool thing. I fell in love instantly. Not only did it give me a backup of all of my contact and calendar data, it offered to automate the process of getting more accurate data. A few clicks, and Plaxo sent out an email that gave each person in my address book (who didn’t belong to Plaxo) a personalized message from me with their address information, asking them if everything was up to date (and of course inviting them to join Plaxo). If the data was good, they simply clicked a button and my address book was updated to say it was valid. If they had changes, they could enter them in the form that was emailed, and Plaxo would automatically take that data and put it into my address book. Best of all, it was free, and they promised to keep your contact data private.

There was also the exciting possibility that if everybody you knew joined Plaxo, you’d never need to ask for an update again, because Plaxo would automatically flow information changes between Plaxo members in your address book. For that alone, I paid the premium support price because I wanted to see them succeed.

And the other bit that was extremely well done was the synchronization between clients. If you used multiple machines, it was really easy to keep them in synch, and for the most part it didn’t have the habit, common in other synchronization software at the time, of duplicating everything over and over.

At some point they got a reputation from some people as being a spammer, I think mostly because during the install it was easy to have Plaxo send an email to everybody in your address book even if you didn’t mean to.  I did this a couple of times myself and ended up sending Plaxo requests to people like John Chambers (who of course I don’t really have any reason to email directly). I suspect mistakes like this caused the spammer reputation because you’d get asked about the email, and it was easier to blame Plaxo than to admit that you forgot to uncheck John Chambers when you asked for updates.

Anyway, back to the point of this story: with their new social networking focus, they no longer have any way to automatically keep address information up to date for people who are not Plaxo members. In fact, the only way you can ask somebody for an update to their information is to invite them to join your Pulse (or fall back on the old-fashioned email approach). So that works for the people who join and don’t mind having yet another social network to think about, but I’m back to square one for people who won’t join Plaxo (often because of the spammer reputation).

It still gives me synchronization between my different computers and a few of my online address books, but it’s no longer as powerful as before. I’d probably still use it if I were again in the situation I’ve been in before, where I needed to keep my address book and calendar at a client site in synch with my home address book and calendar. But now I need to find a solution for the larger part of my address book updating that drove me to Plaxo to begin with.

So don’t be surprised to get spammed by me with an email that says “I’m updating my address book, and this is what I have for you, please update …”

As to Plaxo – I saw this same sort of thing happen when I was at Excite. We basically were Google: we had the best search engine on the planet, our home page was just a search box, and we were doing a better job with the technology than anybody else. But we were smaller than Yahoo (and Alta Vista), and we started to model our web site after a magazine (a lot of trying to match or beat Yahoo instead of focusing on our core competency). It’s my opinion that it was that very loss of focus that resulted in Excite being bought and folded into one failing company after another.

Excite still exists, and they even still sport the LEP (Little Excite Person) logo, but between losing focus (and of course timing) they are no Google (in fact, I wonder if they even do their own search any more).

I am hopeful that Plaxo will reinvent themselves and give me back the functionality that drew me to them, because if they don’t, I fear they are destined to follow Excite’s example: they’ll become an also-ran in the social networking space instead of the stellar provider of a technology that can make life better for anybody who uses it.


I ran into an odd problem with the way Cake is coded that tripped me up for a couple of days. Because I hate it when things don’t work the way I think they should, I spent way more time debugging this than anybody should.

I got my basic RESTful service working for the VolunteerCake project, and everything was working swimmingly, until I needed to turn on debug to figure something out …

When I had the debug level set to less than two (2), calling the action I was interested in with an extension of “.xml” worked fine: I got back the XML representation of the action’s data, returned with a content-type of “application/xml”. But in Cake, if you turn debug up to 2 (or 3), it dumps out the SQL that was run in an HTML table.

The problem is that this HTML table is actually spit out after the rest of the view, which means my RESTful service no longer returns a well-formed document. Additionally (for reasons I’ve yet to isolate), when this happens the document is returned with a content-type of “text/html” instead of “application/xml” as expected. Neither of these things would be acceptable if the site is to provide web services, since it would mean the services would break as soon as somebody needed to debug.

The workaround for this is to manually reset the debug level when the “xml” extension is detected. Since the debug data is useful, and it’s just the SQL table that appears to break the XML, I asked on the IRC channel what the latest point was at which I could set the debug level. The suggestion was to put it either in the afterFilter or at the end of the view itself.

I found that if I put the following code into the beforeFilter method, I could prevent the problem, at the price of losing my debug output:
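
Roughly, it boils down to dropping the debug level whenever the request came in with the XML extension; a minimal sketch (checking RequestHandler->ext is my assumption about the cleanest way to detect the extension set up by parseExtensions()):

    <?php
    // app/app_controller.php -- a sketch, not the exact code from my app
    class AppController extends Controller {
        var $components = array('RequestHandler');

        function beforeFilter() {
            if (isset($this->RequestHandler) && $this->RequestHandler->ext == 'xml') {
                // Drop the debug level so the SQL table isn't appended to the XML document
                Configure::write('debug', 0);
            }
        }
    }
    ?>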

That same code placed in the afterFilter method gave me the debug output in a well-formed XML document (minus the SQL table), as did placing it in the view itself. This leads me to believe that when debug > 1, some code that runs after the beforeFilter is not setting the content type to “application/xml” as would be expected from our routing rules.

Being the bulldog that I am, I dug into the Cake source code to see if I could figure this out. I found the spot where the SQL table was being built, which turned out to be the showLog() method in dbo_source.php, called from the close() method. Since close() is called after the view is finished, and showLog() simply prints the data, that explains why it breaks the XML. It definitely breaks MVC encapsulation, since the data gets dumped into an HTML table and spit out after the view is complete.

On the IRC channel, it was suggested that I try creating a data source that overrides the showLog() method and sends that output somewhere other than the rendered page, which might be worth trying.

I posted my question on the CakePHP Google Group and got the useful suggestion to use FirePHP, which sends log data out through response headers so it can be displayed in Firebug. So my approach will be to write a dbo_mysql_firephp.php class that does just that. This will at least resolve the MVC encapsulation issue and keep my view relatively clean.
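
For reference, a rough sketch of what that class might look like (the file location, class name, and the FirePHPCore fb() helper are all assumptions at this point, and DboMysql is assumed to already be loaded by the core mysql driver):

    <?php
    // app/models/datasources/dbo/dbo_mysql_firephp.php -- untested sketch
    class DboMysqlFirephp extends DboMysql {
        // Send the query log to Firebug via FirePHP instead of letting
        // showLog() print an HTML table after the view has rendered.
        function showLog($sorted = false) {
            if (function_exists('fb')) {
                fb($this->_queriesLog, 'SQL log');
            }
        }
    }
    ?>

Switching to it should then just be a matter of pointing the ‘driver’ setting in app/config/database.php at ‘mysql_firephp’ (again, my assumption about how Cake resolves custom dbo drivers).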

I still want to figure out exactly why the content-type isn’t getting set properly, but for now I have a workaround that I’ll use, and I’ll add the FirePHP debugging to solve the well-formed XML issue if I ever do figure out the content-type problem.

Off to set up my FirePHP plugin and build the dbo class now …


I’m on a quest to make my application provide RESTful web services. After much digging, I found a post by Chris Hartjes at http://www.littlehart.net/atthekeyboard/2007/03/13/how-easy-are-web-services-in-cakephp-12-really-easy/ that helped a lot.

Turns out that Cake has some really nifty built-in support that can be turned on easily. For basic XML support, all you need to do is add a couple of lines to your routes.php file to allow Cake to handle XML. This is pretty well described in the Cookbook at http://book.cakephp.org/view/477/The-Simple-Setup

So for my VolunteerCake project I added the following lines to my routes.php:
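
They boil down to two built-in Router calls (I’m only mapping the jobs controller here; mapping other controllers would work the same way):

    Router::mapResources('jobs');    // map REST-style requests onto the JobsController actions
    Router::parseExtensions('xml');  // treat URLs ending in .xml specially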

The mapResources() call does the magic of mapping REST requests to the actions in the controllers, and parseExtensions() sets up Cake to do some additional routing magic when the request has a “.xml” extension.

So now if I call any of my actions and append “.xml”, Cake changes the response type and view to return XML. Next we need to add the view for the XML, which goes in an xml directory under the views folder for the controller we are REST-enabling (e.g., for jobs, we have a views/jobs/xml directory where the view CTP files need to be placed).

First I created the xml directory under the views/jobs folder, and then I created an index.ctp. This is a very simple file that uses Cake’s XML helper to spit out the data with the following code:
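
The whole view is just a couple of lines (assuming the Xml helper is available in the view, e.g. via the controller’s $helpers array):

    <?php
    // app/views/jobs/xml/index.ctp
    echo $xml->header();          // emit the XML declaration
    echo $xml->serialize($jobs);  // turn the $jobs array into nested XML elements
    ?>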

Now to get the XML to display, all I have to do is create the appropriate views for my REST actions.

So for example if I go to the app/jobs action, I would normally see the XHTML representation of the page like:

[Screenshot: the VolunteerCake Jobs page rendered as XHTML]

Then if I append “.xml” to that same URL, I get the XML back as shown in the following screen shot:

[Screenshot: the XML returned from /jobs.xml]

Next we need to add the view.ctp to support sending back the data for a specific job by ID. This is practically identical to the index.ctp, except the code uses the variable $job instead of $jobs (since that’s what Cake hands the view):
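
In other words, something like:

    <?php
    // app/views/jobs/xml/view.ctp
    echo $xml->header();         // the XML declaration
    echo $xml->serialize($job);  // serialize the single job, including its related records
    ?>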

This gives us the ability to get the XHTML for a specific job using a URL like /jobs/view/1, as shown:

[Screenshot: the XHTML view for /jobs/view/1]


Then by appending “.xml” to that same URL, we get the XML for the job with ID of 1:

[Screenshot: the XML returned from /jobs/view/1.xml]

You may notice that the XML for this job has a lot more data than the same job’s entry in the list did. The XML from /jobs.xml is only one level deep, while the data from /jobs/view/1.xml has a hierarchy: a job has slots, and each slot in turn has a job, user_slot and user.

That happened because the index action was set up to only get the data for the jobs themselves, while the view action had recursion set in order to gather all the related data. By setting the recursive property to 0 (zero) in the index action, we get no children, while in the view action we set the value to 2 (two), which tells Cake to fetch all the HABTM data (see http://book.cakephp.org/view/439/recursive for more on this). Alternatively, we could do a specific find and control which bits of data we populate in the controller to determine what gets spit out in the XML (which would alleviate the one potential downside of this approach: ALL of the data fields and tables currently end up in the XML stream).
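
The relevant controller actions look roughly like this (the find calls are a sketch of my setup rather than anything Cake requires; the recursive settings are the important part):

    <?php
    // app/controllers/jobs_controller.php -- sketch of the two actions
    class JobsController extends AppController {
        var $name = 'Jobs';

        function index() {
            $this->Job->recursive = 0;                    // Job fields only, no children
            $this->set('jobs', $this->Job->find('all'));
        }

        function view($id = null) {
            $this->Job->recursive = 2;                    // pull in slots, user_slots and users
            $this->set('job', $this->Job->findById($id));
        }
    }
    ?>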

The basic point here is that we now have a working RESTful service (at least as far as fetching data) that doesn’t require a great deal of specific view code.

Next: completing the RESTful CRUD


I was sitting in an interesting presentation tonight about managing your career, called “8 Essential Levers for Job (Search) Success” by Chani Pangali, and as part of his talk he mentioned the paradigm shift going on in how careers need to be managed.

As we moved from small villages to an industrial society, we evolved from a barter economy, where you traded what you do for what you need, to a market economy based on doing work that supported industry. To me it seems that this resulted in a huge shift in which many relationships were replaced by intermediaries.

Way back when I was first working at Excite in the heady days of the early web, we used to talk about how the web was going to result in disintermediation (removing the need for intermediaries between businesses). Interestingly enough, that really didn’t happen; rather, we saw an increase in intermediaries, with all sorts of new web ventures springing to life and placing themselves in the middle of the supply chain by adding value to the transaction. That’s not to say they didn’t change businesses, they just didn’t change the paradigm: witness eBay connecting buyers and sellers, changing the business and creating a new way to sell your goods. But while the business was new, the paradigm was still placing trust in the intermediary.

The web has helped drive a shift in this paradigm with phenomena like blogs and social networking sites. By giving us new ways to network and connect, we are finding once again that the relationship is king. Similar to the way eBay connected buyers and sellers, these electronic interactions connect people by allowing them to find common interests and fill needs in ways that would have been far too costly in the past. I can write this post, and somebody I would never have met may find meaning in my words and benefit from them in a way that would not have occurred before. In addition, because blogs are two-way conversations, I might be introduced to an opportunity that could change my life by somebody who has read my blog.

The scope of this change is similar to what happened with the advent of distributed newspapers (and before that, the printing press). The press allowed an idea to be readily shared with a more distributed audience, and the distribution allowed that audience to become even larger. With the web, the cost factor is essentially removed from the distribution, so the same idea is accessible to the entire world (and the barrier to two-way communication is effectively removed too).

The paradigm shift which seems to be going on is also related to the competition and change in the market. While our parents may have been able to find a company that they would commit their work to, and in turn receive some assurance of stability and a partner in their professional development, the global economy no longer supports this sort of relationship. Companies have found they can no longer afford to commit or invest in their employees the way they used to, and have (in general) placed the responsibility firmly on the worker.

I have a new open source project, VolunteerCake, hosted on SourceForge.net, which is using their recently released web hosting service. This service includes the typical LAMP stack with MySQL, Apache and PHP, so I thought it would be a great place to keep a demo of the site running.

It was working fine, and then one day I noticed that the pages were being over-aggressively cached. For instance, if I clicked the login button on the front page and logged in successfully, I expected to see a “logout” button and my user name, but instead I was seeing the original page. By hitting “shift-refresh” I was able to get the right page to display, but obviously that wasn’t a good way to demonstrate the software.


During my work on figuring out my Plaxo problem, I found a really cool tool called Fiddler2 that acts as a web proxy and lets you do nifty things like see the headers on web requests. Using this tool, I was able to look at the cache headers being sent by the server, which looked like:

HTTP/1.1 200 OK
Server: nginx/0.6.31
Date: Tue, 18 Nov 2008 22:02:49 GMT
Content-Type: text/html
Connection: keep-alive
X-Powered-By: PHP/5.2.6
Set-Cookie: CAKEPHP=b7pvoorvj11tb45micnfqhc4b2; path
P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
Cache-Control: max-age=172800
Expires: Thu, 20 Nov 2008 22:02:46 GMT
Content-Length: 444

The problem was the Cache-Control and Expires headers, which were being set 48 hours into the future for my pages, so the browser was displaying its cached version of the page instead of asking the server for a new copy.

Knowing this, I opened a case with the SF.net support team to see if they could help figure out why the server was setting these headers for the PHP pages. I had a suspicion it had to do with the fact that Cake uses a new file extension of “.ctp” for the view files, but I really had no proof of this.

The SourceForge.net guys told me that their service had just been moved to some new servers, so it was possible this was related to that. They suggested that my application was responsible for setting the cache headers, but while Cake does do some caching, that didn’t fit with what I knew: this exact same setup was running on my own hosting service at http://volunteer.lctd.org/, which didn’t send those headers.

I did some research on the Apache settings for caching, and while it is generally something you configure at the server level, I found that it is possible to override these settings in the .htaccess file for a particular directory. Having had to tweak this file before to get Cake to work properly, I knew my .htaccess file looked something like this:
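
It was essentially the stock CakePHP rewrite rules (reconstructed from memory here, so treat it as a sketch):

    <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]
    </IfModule>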

So what I needed to do was tell the server not to set the Cache-Control or Expires headers. After some experimenting, I ended up with a new .htaccess file that looked like this:
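
Roughly the same rewrite block with some caching directives added in front; the specific mod_expires/mod_headers directives below are my reconstruction of the idea, and they assume those modules are enabled on the SourceForge servers:

    <IfModule mod_expires.c>
        ExpiresActive Off
    </IfModule>
    <IfModule mod_headers.c>
        Header unset Cache-Control
        Header unset Expires
        Header set Cache-Control "no-cache, must-revalidate"
    </IfModule>
    <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]
    </IfModule>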

This basically turned off caching for the whole http://volunteercake.sourceforge.net site. Since this is just a demo application, I figured that was good enough, so I didn’t spend any more time figuring out how to restrict the change to a specific type of file (which would be important if this were a larger application).


I spent some time yesterday figuring out CSS problems for Job Connections.

The Job Connections site was built with a print stylesheet that wasn’t including all of the parts of the page that should be printed. They use a stylesheet called print.css, and when somebody tried to print a page, they weren’t getting anything but the text in the middle of the page.

I took a look and found that the stylesheet was setting all of the region styles to “display: none”, which tells the browser not to render them. Editing the stylesheet to remove those bits was all that was needed, so I set it up to print everything but the menu bar at the top and down the side.

In the same file, there was a rule that looked like an attempt to make the links display as bold when the page was printed. The code that was trying to do this looked like:
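
It was a bare anchor rule, roughly (the exact selector and declaration are my reconstruction from the description):

    a {
        font-weight: bold;
    }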


That wasn’t working, mostly because the style was being applied to all anchors. I updated it to look like:
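
Something along these lines (again a sketch of the change as described):

    a:link, a:visited {
        font-weight: bold;
        text-decoration: underline;
    }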

This change applied the style to both links and visited links. I then went one step further and added some magic to get the actual link URL to print (works in CSS2-compliant browsers):
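
That rule uses CSS generated content to append the href (the selectors here are my reconstruction):

    a:link:after, a:visited:after {
        content: " (" attr(href) ")";
    }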

The magic is in the “:after” bit, which basically says “after you display the link, display something else”. With this applied, the links all get bolded and underlined, and each is followed by its actual URL in parentheses.

I got access to the web site (thanks to Walt Feigenson), so this is partially fixed now. It looks pretty good, except that the printed content still has quite a large area of whitespace to the left because of the way the stylesheets interact. I’m playing with updating this now to make the print CSS work the way it should and not inherit the styles that cause this from the “screen” CSS.


A couple of days back, I solved a problem I was having with Plaxo. For a few weeks, I was unable to connect to any of the Plaxo web servers from any of my home machines.

Being a fairly knowledgeable network person, I spent hours trying to diagnose the problem. I could get to every other web site, but not to anything in the plaxo.com domain. Worse, the hostnames resolved fine, and ping and traceroute both looked normal.

First I thought it might be something caused by Plaxo being bought by Comcast. Comcast had just recently been in the news for blocking traffic to keep bandwidth available, so I figured it wasn’t inconceivable that somebody made a mistake in a firewall somewhere that was blocking traffic between them and AT&T.

I sent an email to Plaxo to ask them if their site was up, and called AT&T to see if we could diagnose the problem. AT&T as usual was very nice (and annoying) and started me out with the normal insane steps:

  1. Turn off your firewall
  2. Clear your cache
  3. Turn off your router

After getting past all the annoying stuff, I got to their level 2 support, and then to the 2Wire support to see if they could find anything with my router that might be causing this. Naturally they found nothing, and everything looked OK.

So I escalated with Plaxo, calling them on the phone to see if there was anything they could do. There were emails and phone calls back and forth that never solved the problem:

  • On the first call, I was told that there was a problem with one of their servers and that it would be working the next day (not).
  • On another call, I was told they had found the problem in their web server and that it would be fixed shortly.
  • I got numerous emails telling me to uninstall the Plaxo software and log in again, which of course didn’t work since I couldn’t even get to the web site.
  • I had numerous emails diagnosing the problem as a Mac issue or a PC issue, which again it wasn’t, since it was happening on the Mac, iPhone and PC (and the iPhone doesn’t even have a Plaxo client).

Finally at some point, I got a support guy who told me that my IP address was indeed blocked at their server. Now we’re getting somewhere. But no, it still doesn’t work.

Luckily for me this guy is good, so he tells me that there was an old version of the Plaxo client for Mac that their servers were detecting as a bot attack, so if I uninstall that everything should be golden. I do, and lo and behold I can get to Plaxo again …

So it appears that Plaxo can be incompatible with itself …

I wonder how many people are blocked with the same problem right now.

Recently I’ve entered the world of using the web for self marketing.

I saw a very interesting talk by Walter Feigenson at the last CPC Job Connections meeting about marketing yourself using the web.

I already had a LinkedIn profile, and had my resume on a couple different places, but his talk convinced me that I ought to do some more. So I did the following:

  1. Set up Google reader so I can see all the web changes in one place.
  2. Built a profile on Naymz (http://www.naymz.com); I’m still not clear on exactly what this one does.
  3. Ziki (http://www.ziki.com) – Signed up, but never got the validation email. This is supposed to be a job finding service.
  4. Spokeo (http://www.spokeo.com) – Signed up – not clear on what this site does beyond search for names.
  5. Ziggs (http://www.ziggs.com) – Signed up and built profile, this one looks interesting.

Just signing up for these things takes time, and getting them to be consistent seems like it will be a pain. It reminds me of posting your resume to all of the job search sites: not too bad the first time, but going back to update them all is going to be hard.

Next thing I did was to add cross links from as many different places as I could to my web site (http://www.accuweaver.com). This is supposed to help with the ranking on the search engines, since the search engines use the assumption that if a lot of sites link to you, you must be important.

I also cleaned up my LinkedIn profile, added links, and added my company to the Companies section of LinkedIn.

Then after all of this, I got hit again with the suggestion that I should set up a Facebook profile. Walt had mentioned it, but it took hearing it a few more times for me to act. It still seems a bit smarmy, and unlikely to be useful as a business networking tool, but we’ll see.

Next: Making sure I’m posted on a huge list of sites I got from Valerie Colber