Jason Grigsby: It seems like a basic concept, but the fact that links can only reliably open web pages is often forgotten.
yiibu.com: This site is a proof of concept for many of the ideas described in Rethinking the Mobile Web. You can test many of our site’s adaptive capabilities by simply resizing your desktop browser window. Certain capabilities—related to content adaptation—are however best viewed on a mobile device.
The site is well worth a look (make sure to resize your browser to test it out). It shows that it is entirely possible to design a single set of pages that render one way on a large screen and a different way on a smaller one.
Lately I’ve been playing with Compass, a Sass mixin library derived from Blueprint. Things were working quite nicely in my testing with Firefox and Safari, but in IE6 all the text was centered. It takes some digging in Compass to understand why this happens, so to save others time: Blueprint’s IE stylesheet centers the body (an old IE centering hack) and relies on a class selector on the container to reset the alignment, which means your markup needs to use this

<div class="container"> ... </div>

and not this

<div id="container"> ... </div>
Hardware-assisted virtualization: Hardware-assisted virtualization was first introduced on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system. Virtualization was eclipsed in the late 1970s, with the advent of minicomputers that allowed for efficient timesharing.
George Santayana: Those who cannot remember the past are condemned to repeat it
My first I.T. job right out of university was as a DBA on an IBM mainframe for a large British retailer. That mainframe ran at near 100% CPU all day. The thing was amazing. It ran lots of different workloads. It ran web-like applications (CICS), which got the highest priority, and response times were great. Any spare capacity was devoted to batch jobs. These could be batch jobs the business needed, or developers submitting compilation jobs for code they were working on. The point is that it managed this mix of workloads beautifully: it ensured that the interactive, end-user-driven transactions got priority, and it soaked up any spare capacity with batch jobs that could be swapped in and out at will. Which brings me to the cloud.
There are many reasons that folks are advocating the cloud, but the most important would seem to be cost savings. The case is made in detail in this Booz Allen Hamilton paper, where all the cost savings in their model rest on one eye-popping assumption: “Our analysis assumes an average utilization rate of 12 percent of available CPU capacity in the status quo environment and 60 percent in the virtualized cloud scenarios.” I’ve done a bit of digging, and those two numbers seem to represent reality, which is mind-blowing to me on both fronts. The average server utilization in data centers today is a meager 12%, which is terrible: it means a bunch of servers are sitting there running apps that hardly anyone uses, or seasonal apps that aren’t hit most of the time. It is also surprising that they assume only 60% utilization in the virtualized cloud environment. Given that the mainframe could crank at 100% all day, why isn’t that number higher?
I am speculating now, but I assume it isn’t higher because they are factoring in a significant overhead for hardware-assisted virtualization (the kind that powers EC2). This is the very same overhead that caused hardware-assisted virtualization to be eclipsed, in the late 1970s, by computers that could do much more efficient time sharing and hence save a tonne of money. The key questions for anyone investing in building cloud infrastructure are whether economic factors will again render this technology obsolete, and whether there is something with similar characteristics but without all the overhead. Something that could, say, run 1000 Linux instances on a single host? Hardware-assisted virtualization isn’t even the only game in town.
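To make those two percentages concrete, here is a back-of-the-envelope sketch (my own arithmetic, not from the Booz Allen paper): for the same aggregate workload, moving from 12% to 60% average utilization cuts the server count by a factor of five, and the 100%-busy mainframe ideal would cut it further still.

```python
# Back-of-the-envelope server-count comparison (my own arithmetic, not
# from the Booz Allen paper). Assumes cost scales linearly with the
# number of servers, and 8 cores per server (an arbitrary choice).

def servers_needed(workload_cores, utilization, cores_per_server=8):
    """Servers required to host a workload at a given average utilization."""
    return workload_cores / (cores_per_server * utilization)

workload = 96.0  # aggregate demand, expressed in fully-busy cores

status_quo = servers_needed(workload, 0.12)  # ~100 servers at 12% utilization
cloud = servers_needed(workload, 0.60)       # ~20 servers at 60% utilization
mainframe = servers_needed(workload, 1.00)   # ~12 servers at the 100% ideal

print(f"cloud needs {cloud / status_quo:.0%} of the status-quo servers")
```

The exact server size doesn't matter: the ratio between the scenarios depends only on the two utilization figures.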
Werner Vogels: Today we launched a new option for acquiring Amazon EC2 Spot Instances. Using this option, customers bid any price they like on unused Amazon EC2 capacity and run those instances for as long as their bid exceeds the current “Spot Price.” Spot Instances are ideal for tasks that can be flexible as to when they start and stop.
Spot Instances are ideal for Amazon EC2 customers who have workloads that are flexible as to when their tasks are run. These can be incidental tasks, such as the analysis of a particular dataset, or tasks where the amount of work to be done is almost never finished, such as media conversion from a Hollywood studio’s movie vault, or web crawling for a search indexing company.
There’s a whole class of applications for which this is a game changer. It will be interesting to follow its adoption.
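The mechanic itself is easy to sketch. The following toy simulation (not the EC2 API; the price series and the helper are my own invention) applies the rule quoted above: you run in any hour where your bid meets or exceeds the current spot price, and you are charged the spot price for those hours, not your bid.

```python
# Toy simulation of the Spot Instance rule quoted above: an instance
# runs while bid >= spot price. Not the EC2 API; prices are made up.

def spot_run_hours(bid, spot_prices):
    """Return (hours obtained, total cost) for a given bid.

    You pay the market spot price for each hour you run, not your bid.
    """
    hours = 0
    cost = 0.0
    for price in spot_prices:  # one market price per hour
        if bid >= price:       # bid high enough: instance keeps running
            hours += 1
            cost += price      # charged the spot price, not the bid
    return hours, cost

# Hypothetical hourly spot prices (in $/hour) across a batch window.
prices = [0.03, 0.03, 0.04, 0.07, 0.10, 0.06, 0.04, 0.03]

low_bid = spot_run_hours(0.05, prices)   # interrupted during the price spike
high_bid = spot_run_hours(0.12, prices)  # runs all 8 hours, costs more
print(low_bid, high_bid)
```

A low bid trades interruptions for a lower average price, which is exactly why this suits flexible, restartable work and not interactive serving.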
Matt Heaton: ‘While virtualization techniques have improved dramatically in the last 10 years (think 3D support, para-virtualization for direct access to the hardware layer, etc.) there is a fundamental problem with the whole concept of virtualization that no one ever talks about.’
I don’t know why this isn’t getting more attention. You can cram a lot of shared hosts onto a single box; you can get nowhere near the same number of guest operating systems using virtualization. As a data point, I am paying $6.95 per month for shared hosting, running 4 domains and countless applications. Keeping this blog running continually on Amazon’s elastic cloud would cost $70 per month and a lot more labor.
As an aside, it is also worth noting that certain language runtimes are far better suited to shared hosting than others. Those that support fork, shared libraries, and process isolation, and that start fast: good. Those that start slow, rely on threads, and don’t share memory: not so good. Hmmmmm.
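To illustrate why fork matters here, a toy sketch (my own, not any particular host’s stack): a prefork-style runtime loads everything once, then forks a cheap, isolated child per request. The child shares the parent’s memory copy-on-write, so isolation doesn’t mean duplicating the whole runtime per tenant.

```python
# Toy prefork sketch (POSIX only): the parent loads expensive state
# once, then forks an isolated child per request. fork() is cheap
# because the child shares the parent's memory copy-on-write.
import os

SHARED = list(range(100_000))  # stands in for a loaded runtime/framework

def handle(request):
    pid = os.fork()
    if pid == 0:             # child: an isolated worker process
        SHARED[0] = -1       # copy-on-write: mutation stays in the child
        os._exit(0)          # exit without running parent cleanup
    os.waitpid(pid, 0)       # parent: reap the finished child

for request in range(3):
    handle(request)

print(SHARED[0])  # 0 — the parent's memory is untouched by the children
```

A thread-based runtime gets the memory sharing but not the isolation; a slow-starting one makes the fork-per-request model too expensive. That is the trade-off the aside is pointing at.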