Cloud computing in Canada

April 6th, 2014

After a few amazing years of ignoring this blog, I’m back. I’m attending a conference about cloud computing in Canada called The Cloud Factory in Banff. What an amazing venue the Banff Centre is!

Just published an article about the future of “the cloud” and Canada. I expect big things to happen up here; it’s just a matter of time. I wrote:

…cloud computing requires great connectivity, affordable power, cool air, and top engineering talent combined with world-class data centers, a safe, democratic country with a business-friendly legal system, and close proximity to major markets.

And

Location matters. Storage consumption is increasing at about 50 percent per year, much faster than the capacity of networks. The speed of light is thus far an insurmountable limit. This leads to data gravity: once a significant amount of data is stored somewhere, more data and services tend to follow, gathering momentum and fostering an ecosystem. Big data is the new black gold; with the combination of Moore’s law, cheaper and faster storage, and software such as Hadoop, businesses are opting to store all information now and analyze it later.

But in the brave new world exposed by Edward Snowden, a world enabled by these same trends, many businesses and countries are looking to avoid hosting their data in the USA, and there has been a race to develop regional capabilities. Where data is located has big implications for performance, cost, legal compliance and, importantly, perception. Why have we not seen similar investments in cloud platforms in Canada?
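To make the storage-versus-network gap concrete, here is a back-of-envelope sketch. The 50 percent annual storage growth comes from the excerpt above; the slower network growth rate is purely an assumed placeholder for illustration.

```ruby
# Rough sketch: compound growth of stored data vs. network capacity.
# The 50%/year storage figure is from the article excerpt; the 20%/year
# network figure is an assumption chosen only to illustrate the gap.
storage = 1.0   # relative units at year 0
network = 1.0
(1..5).each do |year|
  storage *= 1.5
  network *= 1.2
  printf("year %d: storage x%.1f, network x%.1f, ratio %.1f\n",
         year, storage, network, storage / network)
end
# After five years the data has grown ~7.6x while the (assumed) network has
# grown ~2.5x; that widening gap is what makes data "heavy" and sticky.
```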

Read the rest here.

The (copy)fight against digital culture, and intellectual privilege

November 10th, 2008

It takes (sci-fi) author Cory Doctorow to put it sufficiently lucidly: the internet is designed to copy information efficiently and inexpensively, and it has flourished as a result; traditional copyright in the age of the web is therefore a direct attack on the digital culture that has given rise to the web. Due to this huge shift in constraints, the legal framework we had is maladapted to the medium. The balances are out of whack.

Take, for example, the concept of the “free rider,” one who benefits from but does not contribute to a common good. Tim Lee points out that the economics of these scenarios are vastly changed by the scale of the internet. This has big implications for intellectual property, as Mark Lemley discusses in his fascinating paper.

Which segues into some broader, and enormous, problems with intellectual property in the age of the web. Never have so many people with such high levels of education had access to so much information. Most good ideas occur to many people simultaneously; the challenge tends to be execution (with some rare exceptions). So I was recently intrigued by an alternative way of looking at IP: “intellectual privilege.” Consider that for a moment. Any originality I have is predicated on some influences: education, peers, privileged information…

More on this when I get another micro-sabbatical, but the bottom line is that society needs to re-evaluate the incentives and rewards for intellectual productivity, to ensure they don’t have the effect of stifling innovation and, worse yet, benefiting a vanishingly tiny fraction of the population.

What a month – I have a new boss

October 1st, 2008

September was a great month. BitNorth was awesome; videos of the content are coming soon at Bitcurrent. Akoha had a great launch at TechCrunch and will hopefully inspire a new generation to “play it forward.” Syntenic hired a new Ops manager who brings some great unix and virtualization chops.

September also brought excitement with general elections in North America, and economic uncertainty with the collapse of Wall Street’s pyramid scheme. However, the BIG NEWS is that I “spawned a child process” (as a colleague likes to put it): a beautiful daughter process. For those who don’t have access to my Facebook photo albums, you can see some pics at the blog of her mother (and my best friend): AlioFish.

Nothing I have done compares to the excitement and fulfillment of being a dad. It refocuses, inspires and is an intense source of joy. I thought as an entrepreneur I would be my own boss, but no longer now that my daughter is here. And for some reason I’m ok with that.

A cartoon guide to Google’s new web browser

September 1st, 2008

Well, I was thinking Amazon’s SAN in the cloud was going to be the biggest web application news of 2008. But that just got trumped by Google’s new web browser, touted by many as an “operating system for the web.” Wow. Open source, heavily influenced by popular web technologies such as Mozilla Firefox and WebKit, with a particular focus on improving JavaScript performance, browser security and stability. There is going to be a lot of information to sort through on this, but it certainly looks extremely promising. Check out the excellent cartoon guide!

Personal update

August 17th, 2008

Heri at Montreal Tech Watch broke the news that my web infrastructure services company Syntenic has a new (beta) webpage. I have no doubt that my amazing wife’s blog pulls in more visitors than I do, so I am hoping to reverse that trend with a slick new design!

I also eked out a Shakespeare-inspired article on cloud computing for BitCurrent, an Alistair Croll initiative to which I contribute sporadically but enthusiastically (witness the awesome graphics here).

Speaking of collaboration with Alistair, the made-in-Montreal BitNorth conference will soon be upon us, a unique group investigation of the intersections of technology, social issues, policy and – most fittingly given the amazing location of the event – music (can revelry be far behind?). I can’t say enough good things about the location, the topics, or the people who will be there. You can still register here, if you’re lucky :)

Search gets smarter, we get stupider

June 30th, 2008

A lot has been written lately on how intelligent search will solve all kinds of problems. Most recently, in The End of Theory, Chris Anderson of “long tail” fame confuses the abundance of low-hanging fruit that “big search” and biotechnologies provide with the ability to really understand and extract meaning, to pose hypotheses and to falsify or support them. Mathew Ingram takes issue with the Wired article in Google and the end of everything, and Alistair Croll piles on in Does Big Search change science?, emphasizing the familiar scientific refrain: correlation does not imply causation.

To be fair to Chris, it seems that he does understand Mathew’s point that correlation is not causation; rather, his thesis seems to be that with sufficiently large datasets and powerful computational algorithms, correlation approaches causation. However, I side with Mathew and Alistair: either Chris doesn’t understand what Google or rapid gene sequencing bring to scientific analysis, or he has written an excellent satirical article:

Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

It sounds like we should be able to just sit back and feed the raw data into a massive cloud computer, grab a few coffees, live a few lifetimes and get some answers (Deep Thought anyone?). As the search technology gets smarter we can all afford to get a lot stupider, as we are no longer required to solve scientific problems.
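To see why the distinction matters, here is a toy illustration of my own (not from any of the articles above): mine enough unrelated variables for patterns and impressive correlations will turn up in pure noise, which is why scale alone can’t turn correlation into causation.

```ruby
# Toy demo: look for the strongest pairwise correlation among short random
# series. With enough pairs to compare, "pure noise" reliably produces
# correlations that would look striking if we didn't know their origin.
def pearson(x, y)
  n  = x.size.to_f
  mx = x.sum / n
  my = y.sum / n
  cov = x.zip(y).sum { |a, b| (a - mx) * (b - my) }
  sx  = Math.sqrt(x.sum { |a| (a - mx)**2 })
  sy  = Math.sqrt(y.sum { |b| (b - my)**2 })
  cov / (sx * sy)
end

series = Array.new(200) { Array.new(10) { rand } }   # 200 unrelated series
best   = series.combination(2).map { |a, b| pearson(a, b).abs }.max
puts "strongest |correlation| found in pure noise: #{best.round(2)}"
```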

In actuality, Google’s PageRank algorithm(s) and Craig Venter’s shotgun DNA sequencing techniques are successful because they are overly simplistic: designed to capture low-hanging fruit as quickly as possible, they don’t solve the hard problems. Rather, they get us faster down a road that leads to more questions, questions that are likely too complicated for either search engines or cute biotech tricks to answer, requiring experiments and analyses that are too intricate and error-sensitive, that need to be hand-held, coaxed and cajoled. Science in the real world is very different from the platonic model taught in schoolbooks. Failure is important, errors are crucial, and we progress because human thought is remarkably adaptable and resilient in the face of this. Contrast that with the kinds of problems we will get when our analysis is guided by bug-ridden computer algorithms, infested with worms, working on data riddled with errors and spam.

Until the computing power, and the algorithms which guide it, are truly evolutionarily designed, I don’t think science will learn much from the computer. When we do get the kind of AI that Chris and the Google founders are looking for, I suspect they will find it impossible to clock that type of artificial intelligence at gigahertz speeds, and that we may end up re-evolving a computer that looks and acts very much like the human brain. At which point we may regret not making better use of the ones we already have.

For the next stop on this train of thought, read the excellent article Is Google Making us Stupid? I’ve got one foot in the YES camp.

Addendum: the Wired article bothered me as an epitome of reductionist scientific thought. Reductionism by nature tends to focus on the simple problems; hard problems, which are complex and expensive to tackle, get avoided, which leads to the amplification of reductionist techniques and causes. Sooner or later you might be convinced that all knowledge is within the reach of such reductionist approaches. There is a disturbing correlated trend for industry funding of scientific research to further skew science by leaving problems without obvious economic payoffs by the wayside. I would suggest that both industrial and reductionist science are represented in the Wired hypothesis.

Cloud computing – linear utility or complex ecosystem?

June 22nd, 2008

Reuven of Enomaly speculates on whether there will be an analogue of Moore’s law for cloud computing, looking to coin “Ruv’s law.” I would like to see more detail on what it would postulate, presumably some regular relationship between growth in cloud computation and time. I think we would also agree it would need to stand the test of time before it could be considered a “law.” Moore referred to a rather simple relationship between the number of transistors that can economically be used in electronic chips and time.

The cloud, however, is likely to become a very complex ecosystem, and to defy simple linear rules of productivity. Rather, I would expect the cloud to behave in unexpected ways and exhibit emergent properties. On that note, I am much more interested in the phase transitions, the critical junctures where the properties of the system change radically, and in what the underlying causes might be (technological breakthroughs, human behaviour, power shortages). I wouldn’t be shocked if the behaviour of clouds turned out to be as hard to predict as the weather (“the 5-day forecast calls for a 200 msec standard deviation in latency with a 10% probability of the jitters”) or the stock markets. I’m only slightly joking – my early experiences with shared hosted grid computing resources have been variable (Mediatemple and Mosso have low-cost plans). In any case, I look forward to more clarity on cloud structure, composition, performance, any potential “laws,” and above all the likelihood of rain…

Anyone interested in a lively string of Q&A surrounding the much-hyped “cloud computing” revolution should look in on the Google group for cloud computing and check what the insightful Alistair Croll of Bitcurrent has to say. Lots of folks are trying to define cloud computing these days (check out defogging the cloud for a nice simple explanation), and it’s hard to do, partly due to a Cambrian explosion of diversity which makes the cloud(s) a fast-moving target. As for me, I’m embracing the trend from the web operations trenches while keeping my sense of humour about the hype:

The cloud has everything and the kitchen sink
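As a footnote, here is a minimal sketch of the kind of regularity Moore’s law actually describes: capacity doubling over a roughly fixed period (about two years for transistor counts). Any “Ruv’s law” would presumably claim a similarly smooth curve for the cloud, which is exactly the part I’m skeptical of; the numbers below are placeholders, not predictions.

```ruby
# Moore-style growth: capacity doubles every `doubling_period` years.
# The two-year period is Moore's; applying it to cloud capacity is purely
# hypothetical and only illustrates what such a "law" would postulate.
def moore_style_capacity(initial, years, doubling_period = 2.0)
  initial * 2**(years / doubling_period)
end

(0..10).step(2) do |year|
  capacity = moore_style_capacity(1.0, year)
  puts "year #{year}: ~#{capacity.round(1)}x baseline capacity"
end
```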

Awesome Magnetic Visualization

June 18th, 2008

Semiconductor’s Magnetic Movie is a stunning, if questionably accurate, visualization of magnetic fields and their interactions. Worth a watch:


Magnetic Movie from Semiconductor on Vimeo.

Goaltender, Bitcurrent, and Miscellanea

June 11th, 2008

Finally got www.goalr.net into the Heroku private beta, which will allow the private domain and e-mail functionality that our “Goaltender” application relies upon to both gather our weekly goals and follow up on them.

Just posted to Bitcurrent on the future of cloud computing.

Business has been moderately insane (in a good way) and we’ll be moving offices downtown shortly. Somewhere in the last few weeks a house was purchased; Alio blogged about it on our new blog, “now we are three”.

Blitzweekend project: getting real with GoalR

March 1st, 2008

Ever since I attended 37signals’ “Getting Real” workshop I’ve wanted to put some of their design principles to the test. My company is Syntenic, a boutique consulting and services shop primarily focused on web operations, performance optimization via application delivery controllers (load balancing and server offload), and 24×7 high-availability, geographically distributed architecture. What this means is that we are usually helping enhance the performance and reliability of our customers’ applications instead of building our own. Recently this has been changing, as we are increasingly redesigning or developing applications from scratch for our customers.

CodeBlitz
Blitzweekend in Montreal has given us the opportunity to give the Getting Real approach, and Ruby on Rails in particular, a whirl. Our challenge was to come up with a project idea that was feasible in one weekend, including conception and coding. It didn’t take much to convince our web codemonkey extraordinaire Will Stevens to take on the challenge. Lucky for us, since he has to do all the real work (including learning Rails on the fly)! Alistair Croll joined on as our resident marketing genius.

The onus was on me to come up with a simple enough project, which ultimately was inspired by an article I read several years ago about one of Google’s management approaches: employees were asked to list their objectives at the start of every week and report at the end of the week which of those objectives had been achieved. I’m not a fan of task management; listing and tracking to-dos can take more time than getting them done. The big challenge I have from a management perspective is keeping all my objectives in my sights; the tasks required to complete the higher-level goals seem to follow easily as long as I can keep focused on the goal. You don’t become a great goal scorer by looking at the puck, or by thinking about stick handling or ball dribbling. Great players keep their heads up and their eyes on where the puck or the ball needs to go.

Aren’t you a little short for a goaltender?
Perhaps the sports analogy isn’t perfect, but I’m going to run with it. The application is codenamed “goaltender,” shortened to GoalR (www.goalr.net) because all the other domains were taken. GoalR e-mails you at the start of the week asking what your objectives for the week are, the goals in question. At the end of the week you are prompted to indicate which goals you managed to “score,” or, worse yet, admit defeat. We are looking to allow the user to score. If we have time we will introduce the concept of teams and rosters, allowing team members to track, encourage, even heckle their fellows in pursuit of their goals. We’d love to introduce the concept of assists, if we can figure out how that would work.
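For the curious, here is a minimal plain-Ruby sketch of the idea. The class and method names are hypothetical; the real app is being built in Rails with e-mail prompts rather than this in-memory toy.

```ruby
# Hypothetical sketch of the GoalR flow: set the week's goals, mark the ones
# you "score", and summarize at the end of the week. Names are illustrative;
# the actual application (Rails, e-mail driven) is not shown here.
class WeeklyGoals
  Goal = Struct.new(:description, :scored)

  def initialize
    @goals = []
  end

  # Start of week: record the objectives for the week.
  def set(descriptions)
    @goals = descriptions.map { |d| Goal.new(d, false) }
  end

  # End of week: mark a goal as scored.
  def score(description)
    goal = @goals.find { |g| g.description == description }
    goal.scored = true if goal
  end

  def summary
    "#{@goals.count(&:scored)}/#{@goals.size} goals scored this week"
  end
end

week = WeeklyGoals.new
week.set(["ship GoalR beta", "write BitNorth talk"])
week.score("ship GoalR beta")
puts week.summary   # => "1/2 goals scored this week"
```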

First goal of the game…
Alistair’s here, back to work (and no doubt 100 digressions a minute). Our objective, “goal” if you will, is to build a working goal management application (not another task manager!). It needs to be functional and useful by the end of this weekend, despite our attending Steph’s 30th birthday party tonight. We’re keeping it real and trying to make management fun. It might be an unrealistic goal, but hey, if you don’t take shots you’ll never score.