Big Tin

Big tin: IT infrastructure used by organisations to run their businesses. And other stuff too when I feel like it…

Why some websites are deliberately designed to be insecure

Passwords remain the bane of our lives. People print them out, they re-use them or slightly change them for different services, or they simply use the same one for decades.

On the other side of the coin, for most users of a service, it’s a pain to remember a password and a bigger pain to change it – and then to have to remember a new one all over again as websites change, get hacked and/or alter their security policies.

But while passwords are imperfect, they’re the least worst option in most cases for identifying and authenticating a user, and the best way of making them more secure is to use each password for only one site, and make them long, complex, and hard to guess. However, some websites are purposely designed to be less secure by subverting those attempts. Here’s why.
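
As a rough illustration of what “long, complex, and hard to guess” means in practice, here’s a minimal Python sketch using the standard library’s secrets module – roughly what a password manager does for you automatically (the function name is my own):

```python
import secrets
import string

def generate_password(length=24):
    """Build a password from a wide alphabet using a
    cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # a fresh 24-character password each call
```

A 24-character draw from a 94-symbol alphabet is far beyond any guessing attack – and precisely the kind of password nobody can memorise, which is why a manager matters.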

2FA doesn’t work

Two-factor authentication (2FA) is often seen as more secure, but support for it is patchy at best. For example, the division between enterprise and personal environments has today all but evaporated. In the course of their jobs, people increasingly access their personal services at work using their personal devices. And employers can’t mandate 2FA for access to Facebook, for example, which might well be the chosen communication method of a key supplier, or a way of reaching potential customers. All Facebook wants is a password, and it’s not alone.

Two-factor authentication is also less convenient and takes more time. You’re prepared to tolerate this when accessing your bank account because, well, money. For most other, less important services, adding barriers to access is likely to drive users into the arms of the competition.

Password persistence

So we’re stuck with passwords until biometrics become a pervasive reality. And maybe not even then – but that’s a whole other issue. The best solution I’ve come up with to the password problem is a password manager. Specifically, KeePass, which is a free, open-source, cross-platform solution with a healthy community of third-party developers of plug-ins and utilities.

You only have to remember one thing: a single master password (or a key file) that unlocks the whole database and gives you access to everything. And as the website says: “The databases are encrypted using the best and most secure encryption algorithms currently known (AES and Twofish).”

It works on your phone, your PC, your Mac, your tablet – you name it, and it’ll generate highly secure passwords for you, customised to your needs. So what’s not to like?

Pasting problems

Here’s the rub: some websites think they’re being more secure by preventing you from pasting a password into their password entry fields. Some website security designers argue that passwords for their service should not be stored in any form. But a password manager works by pasting passwords into the login field.

The rationale for preventing password pasting is that malware can snoop the clipboard and pass that information back to the crooks. But this is using a sledgehammer to crack a nut: KeePass uses an obfuscation method to ensure the clipboard can’t be sniffed, and it clears the password after a configurable interval, so the exposure window can be very short – 10 seconds will do it.
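
To make the timed-clearing idea concrete, here’s a toy Python sketch – an in-process stand-in for the OS clipboard, not KeePass’s actual code, but the same countdown logic:

```python
import threading
import time

class TimedClipboard:
    """Toy in-process model of a password manager's auto-clearing
    clipboard. KeePass wipes the real OS clipboard; this sketch
    just shows the timer mechanism."""
    def __init__(self, clear_after=10.0):
        self.clear_after = clear_after   # seconds until the secret is wiped
        self._value = None
        self._timer = None

    def copy(self, secret):
        self._value = secret
        if self._timer is not None:
            self._timer.cancel()         # restart the countdown on each copy
        self._timer = threading.Timer(self.clear_after, self.clear)
        self._timer.daemon = True
        self._timer.start()

    def paste(self):
        return self._value

    def clear(self):
        self._value = None

cb = TimedClipboard(clear_after=0.2)     # 0.2s for the demo; ~10s in practice
cb.copy("correct-horse-battery-staple")
assert cb.paste() == "correct-horse-battery-staple"
time.sleep(0.5)
assert cb.paste() is None                # wiped once the timeout elapses
```

The point is that the secret lives in a snoopable place only for a window you control.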

In addition, as Troy Hunt, a Microsoft MVP for Developer Security, points out: “the irony of this position is that [it] makes the assumption that a compromised machine may be at risk of its clipboard being accessed but not its keystrokes. Why pull the password from memory for the small portion of people that elect to use a password manager when you can just grab the keystrokes with malware?”

In other words, preventing pasting is counter-productive; it’s reducing security. Don’t believe me? Check out this scenario.

Insecure by design

So if you can’t paste a password in, what do you do? If you use a password manager – probably the most secure way of storing passwords today, and one that puts you way ahead of the game – you open up the entry for that service in KeePass, expose the password to any prying eye that happens to be passing, and type in the password – which is likely to be long and complex – manually, character by character. That probably takes a few minutes.

Can you see anything wrong with that? If you’re sitting in a crowded coffee shop, for example?

Yup. A no-paste policy is annoying, slow, prone to mistakes, and highly insecure. Worse, it’s likely to be the security-conscious – those using password managers and the like – who are most affected. Even a simple file full of passwords – hopefully encrypted – and tucked away in an obscure location is likely to be more secure than the method many if not most people use: re-using common, easily memorable passwords.

I’ve had discussions about this with one major UK bank which implemented a no-paste policy and seems since to have reversed course – whether as a result of my intervention (and no doubt that of others too) I have no way of knowing.

Say no to no-paste

So if you encounter a website that does not allow you to paste in a password in a mistaken bid to add security, point out to them that in effect, they’re forcing people to use weak passwords that they can remember, which will be less secure.

As Troy Hunt says: “we’ve got a handful of websites forcing customers into creating arbitrarily short passwords then disabling the ability to use password managers to the full extent possible and to make it even worse, they’re using a non-standard browser behaviour to do it!”

Filed under: passwords, Security, Technology

Is the cloud letting consumers down?

The promise of cloud services has, by and large, been fulfilled. From the early days right up to the present, the big issue has been security: is your data safe?

What this question is really asking is whether you can retrieve your data quickly in the event of a technological melt-down. You know the kind of thing: an asteroid hits your business premises, a flood or fire makes your office unusable for weeks or months, or some form of weird glitch or malware makes your data unavailable, and you need to restore a backup to fix it.

All these scenarios are now pretty much covered by the main cloud vendors so, from a business perspective, what’s not to like?

Enter the consumer

Consumers – all of us, in other words – are also users of cloud services. Whether your phone uploads photos to the manufacturer’s cloud service, or you push terabytes of multimedia data up to a big provider’s facility, the cloud is integrated into everything that digital natives do.

The problem here is that, when it comes to cloud services, you get what you pay for. Enterprises will pay what it takes to get the level of service they want, whether it’s virtual machines for development purposes that can be quick and easy to set up and tear down, or business-critical applications that need precise configuration and multiple levels of redundancy.

Consumers on the other hand are generally unable to pay enterprise-level cash but an increasing number have built large multimedia libraries and see the cloud as a great way of backing up their data. Cloud providers have responded to this demand in various ways but the most common is a bait-and-switch offer.

Amazon’s policy changes provide the latest and arguably the most egregious example. In March 2015 it launched an unlimited data storage service – covering all data, not just photos as Google and others were already offering – for just £55 a year. Clearly many people saw this as a massive bargain and, although figures are not publicly available, many took it up.

Amazon dumps the deal

But in May 2017, just over two years later, Amazon announced that the deal was going to be changed, and subscribers would have to pay on a per-TB basis instead. This was after many subscribers – according to user forums – had uploaded dozens of terabytes over a period of months at painfully slow, asymmetrical data rates.

Now they are offered, on a take-it-or-leave-it basis, a more expensive cloud service – costing perhaps three or four times as much, depending on data volumes – and are left with a mass of data that will be difficult to migrate. On Reddit, many said they have given up on cloud providers and are instead investing in local storage.

This isn’t the first time such a move has been made by a cloud provider: bait the users in, then once they’re committed, switch the deal.

Can you trust the cloud?

While cloud providers are of course perfectly at liberty to change their terms and conditions according to commercial considerations, it’s hard to think of any other consumer service where such a major change to the T&Cs would be risked, for fear of user backlash – especially by one of the largest global providers.

The message that Amazon’s move transmits is that cloud providers cannot be trusted, and that a deal that looks almost too good to be true will almost certainly turn out to be just so, even when it’s offered by a very large service provider who users might imagine would be more stable and reliable. That the switch comes at a time when storage costs continue to plummet makes it all the more surprising.

In its defence, Amazon said it will honour existing subscriptions until they expire, and only start deleting data 180 days after expiry.

That said, IT companies need to grow up. They’re not startups any more. If they offer a service and users in all good faith take them up on it, as the commercial managers at Amazon might have expected, they should deal with it in a way that doesn’t potentially have the effect of destroying faith and trust in cloud providers.

It’s not just consumers who are affected. It shouldn’t be forgotten that business people are also consumers and the cloud purchasing decisions they make are bound to be influenced to a degree by their personal experiences as well as by business needs, corporate policy and so on.

So from the perspective of many consumers, the answer to the question of whether you can trust the cloud looks pretty equivocal. The data might still be there but you can’t assume the service will continue along the same or similar lines as those you originally signed up to.

Can you trust the cloud? Sometimes.

Filed under: Cloud computing, Consumer, Storage, Technology

AVM Fritz!Box 4040 review

AVM Fritz!Box 4040

AVM’s Fritz!Box range of routers has long offered a wealth of features and, in my experience, the devices are highly reliable.

The 4040 sits at the top end of the lower half of AVM’s product line-up. The top half includes DECT telephony features but if you’ve already got a working cordless phone system, you can live without that.

The 4040 looks like all the other Fritz!Box devices: a red and silver streamlined slim case without massive protuberances that would persuade you to hide the device from view. A couple of buttons on the top control WPS and WLAN, while indicators show status, with the Info light moderately configurable; it would be helpful if AVM broadened the possible uses of this indicator.

At the back are four 1Gbps LAN ports, which you can individually downgrade to 100Mbps to save power, and a WAN port. A couple of USB ports are provided too, one 3.0, one 2.0.

The 4040 supports all forms of DSL, either directly or via an existing modem or dongle, and offers 802.11n and 802.11ac WLAN on both 2.4GHz and 5GHz. The higher-frequency network provides connectivity at up to a theoretical 867Mbps; I managed to get 650Mbps with my phone right next to the access point.

Power-saving modes are available for the wireless signal too – it automatically reduces the wireless transmitter power when all devices are logged off – providing a useful saving for a device you’re likely to leave switched on all the time.

Security is catered for by MAC address filtering on the wireless LAN, and by a stateful packet inspection firewall with port sharing to allow access from the Internet.

The software interface is supremely easy to use, and handsome too. The overview screen gives an at-a-glance view of the status of the main features: the Internet connection, devices connected to the network, the status of all interfaces, and the NAS and media servers built into the router.

The NAS feature allows you to connect USB storage to the router and access it from anywhere over UPnP, FTP or SMB (Windows networking). Other features include Internet-only guest access which disables access to the LAN, an IPSec VPN, and Wake on LAN over the Internet.

The Fritz!Box 4040 is the latest in a long line of impressive wireless routers, continuing AVM’s tradition of high quality hardware and software, and it’s good value at around £85.

Filed under: Product, Review, Technology

How to stay safe on the Internet – trust no-one

Working close to the IT industry as I do, it’s hard to avoid the blizzard of announcements and general excitement around the growth of the Internet of things allied to location-based services. This, we are told, will be a great new way to market your goods and services to consumers.

You can get your sales assistants to greet shoppers by name! You can tell them about bargains by text as they walk past your store! You might even ring them up! Exclamation marks added for general effect.

But here’s the thing: most people don’t trust big corporations any more, according to the recently published 2013 IT Risk/Reward Barometer, an international study. It finds: “Across all markets surveyed, the vast majority of consumers worry that their information will be stolen (US: 90%, Mexico: 91%, India: 88%, UK: 86%).”

As a result, blizzard marketing of the kind that triangulation technologies now permit makes people feel uneasy at best and downright annoyed at worst. People ask themselves who has their data, how they got it, and what control they have over it once it has escaped into the ether.

From ISACA’s point of view, this is largely the fault of individuals who don’t control their passwords properly or otherwise secure their systems. It’s an auditing organisation, so that’s not an unusual position to adopt. But I think it goes further than that.

As the study also points out: “Institutional trust is a critical success factor in an increasingly connected world. […] Organisations have much work to do to increase consumer (and employee) trust in how personal information is used.”

In other words, companies need to work harder at winning your trust. Does that make you feel any better?

This is clearly not an issue that will ever be fully solved. For every ten organisations that are trustworthy and manage personal data responsibly – you do read that text-wall of privacy policy each time you sign up to a new site, don’t you? – there will be one that isn’t. And even if all companies were trustworthy, people would still make mistakes and hackers would win the security battle from time to time, resulting in compromised personal data.

The only rational policy for the rest of us to adopt is to trust none of them, and that is what this study shows most people tend to do.

The least you should do is use long, complex passwords and change them regularly, using a password safe (eg KeePass) so you don’t have to commit them to memory – or worse, to bits of paper.

FYI, the study was conducted by ISACA, which describes itself as “an independent, nonprofit, global association [that] engages in the development, adoption and use of globally accepted, industry-leading knowledge and practices for information systems.”

Filed under: data protection, Security, Technology

Storage roundup with PernixData, Arista Networks and Tarmin

There’s been a bit of a glut of storage announcements recently, so here’s a quick round-up of the more interesting ones from recent weeks.

PernixData
This company is thinking ahead to a time when large proportions of servers in datacentres will have flash memory installed inside them. Right now, most storage is configured as a storage pool, connected via a dedicated storage network but this is sub-optimal for virtualised servers which generate large amounts of IOPS.

So instead, companies such as Fusion-io have developed flash memory systems that sit inside servers, so that data is local and can be accessed much more quickly. This abandons one of the advantages of the storage network, namely storage sharing.

So PernixData has created FVP (Flash Virtualization Platform), a software shim that sits in the hypervisor and links the islands of data stored in flash memory inside each of the host servers. The way it works is to virtualise the server flash storage so it appears as a storage pool across physical hosts. Adding more flash to vSphere hosts – they have to be running VMware’s hypervisor – prompts FVP to expand the pool of flash. According to the company, it works irrespective of the storage vendor.

What this effectively does is to create a cache layer consisting of all the solid-state storage in the host pool that can boost the performance of reads and writes from and to the main storage network.
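
The cache-layer idea can be sketched as a simple write-through cache in front of a slow backing store – illustrative names only, not FVP’s real API, and it glosses over FVP’s write acceleration:

```python
class FlashCache:
    """Write-through read cache in front of a slow backing store –
    a toy sketch of a server-side flash tier, not PernixData's
    actual implementation."""
    def __init__(self, backing, capacity):
        self.backing = backing      # stands in for the shared array
        self.capacity = capacity    # how many blocks fit in local flash
        self.cache = {}             # insertion-ordered in Python 3.7+

    def _evict_if_full(self):
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # drop the oldest entry

    def read(self, key):
        if key in self.cache:
            return self.cache[key]          # hit: served from local flash
        value = self.backing[key]           # miss: fetch over the storage network
        self._evict_if_full()
        self.cache[key] = value
        return value

    def write(self, key, value):
        if key not in self.cache:
            self._evict_if_full()
        self.cache[key] = value             # cache the new version...
        self.backing[key] = value           # ...and write through to the array

store = {"blk0": "a", "blk1": "b"}   # the shared array, as a plain dict
fc = FlashCache(store, capacity=1)
fc.read("blk0")                       # miss: fetched from the array, now cached
fc.write("blk1", "B")                 # evicts blk0, updates flash and array
```

FVP’s extra trick, which this sketch omits, is pooling the flash of every host so a VM’s cached blocks survive a migration to another server.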

The company reckons that: “For the first time ever, companies can scale storage performance independent of storage capacity using server side flash.” And according to CEO Poojan Kumar: “We allow all virtual machines to use every piece of flash memory. The result is 17 times lower latency and 12 times more IOPS. It’s non-disruptive, it looks very simple and is easy to use.” It costs US$7,500 per physical host or US$10k for four hosts – a price point designed for smaller businesses.

It seems like a pretty good idea, and there’s some real-world testing info here.

Arista Networks
Also new on the hardware front are products from Arista Networks.

This company started life a few years ago with a set of high-performance network switches that challenged the established players – such as Cisco and Juniper – by offering products that were faster, denser, and cheaper per port. Aimed at the high-performance computing market, which includes users in areas such as the life sciences, geological data processing, and financial services, they were the beachhead that established the company’s reputation, something it found easy given that its founders included Jayshree Ullal (ex-Cisco senior vice-president) and Andy Bechtolsheim (co-founder of Sun Microsystems).

I recently spoke to Doug Gourlay, Arista’s vice-president of systems engineering, about the new kit, which Gourlay reckoned mean that Arista “can cover 100% of the deployment scenarios that customers come to us with”. He sees the company’s strength as its software, which is claimed to be “self-healing and very reliable, with an open ecosystem and offering smart upgrades”.

The new products are the 7300 and 7250 switches, filling out the 7000 X Series which, the company claims, optimises costs, automates provisioning, and builds more reliable scale-out architectures.

The main use cases for the new systems are sites with large numbers of servers in small datacentres, and dense, high-performance computing render farms, according to Gourlay. They are designed for today’s flatter networks: where a traditional datacentre network used three layers, a modern fabric-style network uses just two, offering the fewest hops from any server to any other. In Arista-speak, the switches attaching directly to the servers and directing traffic between them are leaves, while the core datacentre network is the spine.
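
The hop-count advantage of the two-layer design can be sketched like this – a back-of-envelope model of my own, not Arista’s figures:

```python
# In a two-layer leaf-spine fabric every leaf uplinks to every spine,
# so traffic between servers on different leaves always crosses three
# switches (leaf -> spine -> leaf); a three-layer
# access/aggregation/core design can need five in the worst case.
def switch_hops(leaf_a, leaf_b, layers=2):
    if leaf_a == leaf_b:
        return 1                       # same top-of-rack switch
    return 3 if layers == 2 else 5     # leaf-spine-leaf vs worst-case 3-layer

print(switch_hops("leaf1", "leaf1"))            # 1
print(switch_hops("leaf1", "leaf2"))            # 3
print(switch_hops("leaf1", "leaf2", layers=3))  # 5
```

Fewer hops means lower and, crucially, more uniform latency between any two servers.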

The 7300 X Series consists of three devices, with the largest, the 21U 7316, offering 16 line-card slots with 2,048 10Gbps ports or 512 40Gbps ports. Claimed throughput is 40Tbps. The other two in the series, the 7308 and 7304, accommodate eight and four line cards respectively, with decreases in size (21U and 8U) and throughput (20Tbps and 10Tbps).

The 2U, fixed configuration 7250QX-64 offers 64 40Gbps ports or 256 10Gbps ports, and a claimed throughput of up to 5Tbps. All systems and series offer reversible airflow for rack positioning flexibility and a claimed latency of two microseconds. Gourlay claimed this device offers the highest port density in the world.

Tarmin
Tarmin was punting its core product, GridBank, at the SNW show. It’s an object storage system with bells on.

Organisations deploy object storage technology to manage very large volumes of unstructured data – typically at the petabyte scale and above. Such data is created not just by workers but, increasingly, by machines: scientific instrumentation, including seismic and exploration equipment, genomic research tools and medical sensors, and industrial sensors and meters, to cite just a few examples.

Most object storage systems restrict themselves to managing the data on disk, leaving other specialist systems, such as analytics tools, to extract meaningful insights from the morass of bits. What distinguishes Tarmin is that GridBank “takes an end to end approach to the challenges of gaining value from data,” according to CEO Shahbaz Ali.

He said: “Object technologies provide metadata but we go further – we have an understanding of the data which means we index the content. This means we can analyse a media file in one of the 500 formats we support, and can deliver information about that content.”

In other words, said Ali: “Our key differentiator is that we’re not focused on the media like most storage companies, but the data – we aim to provide transparency and independence of data from media. We do data-defined storage.” He called this an integrated approach which means that organisations “don’t need an archiving solution, or a management solution” but can instead rely on Gridbank.

That all sounds well and good, but one of the biggest obstacles to adoption has to be single-sourcing a technology that aims to manage all your data. Tarmin also has very few reference sites (I could find just two on its website), so the number of organisations taking its medicine appears to be small.

There are also, of course, a number of established players in the markets that GridBank straddles, and it remains to be seen whether an end-to-end solution is what organisations want, when integrating best-of-breed products avoids proprietary vendor lock-in – to which companies are more sensitive than ever – and is more likely to prove better for performance and flexibility.

Filed under: Storage, Technology

Innergie mMini DC10 twin-USB charging car adapter

Clean design


We all travel with at least two gadgets these days – or is it just me? What you too often don’t think about, though, is that each gadget adds to the chore of battery management. The Innergie 2A adapter’s twin USB charging ports will help.

Twin USB ports


The company sent me a sample to try and I found the design to be clean and tidy, and it all works as expected. It’s also quite compact, measuring 70mm long from tip to tail, and protruding from the car’s power socket by just 28mm. This means it won’t take up too much precious space, an issue especially if the power socket is mounted in the glovebox.
Nice shiny contact


When activated, the front lights up a pleasing blue, and the adapter can then charge your USB devices at up to its maximum 2A. This means that if your device’s battery capacity is 2,000mAh, which is reasonably typical, it’ll take an hour (in theory) to recharge from empty.
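
The arithmetic behind that claim is simple – a hedged sketch, since real-world charging is never 100% efficient:

```python
def charge_time_hours(capacity_mah, current_ma, efficiency=1.0):
    """Ideal charge time = capacity / current. Real charging is
    slower, so an optional efficiency factor (<1.0) can be applied."""
    return capacity_mah / (current_ma * efficiency)

print(charge_time_hours(2000, 2000))        # 1.0 hour, the ideal case
print(charge_time_hours(2000, 2000, 0.8))   # 1.25 hours at 80% efficiency
```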

Officially, it costs £19 (probably less on the street), and there’s more about it here.

Filed under: Consumer, Technology

Hard disks and flash storage will co-exist – for the moment

When it comes to personal storage, flash is now the default technology. It’s in your phone, tablet, camera, and increasingly in your laptop too. Is this about to change?

I’ve installed solid-state disks in my laptops for the last three or so years simply because it means they fire up very quickly and – more importantly – battery life is extended hugely. My Thinkpad now works happily for four or five hours while I’m using it quite intensively, where three hours used to be about the maximum.

The one downside is the price of the stuff. It remains stubbornly stuck at 10x or more the price per GB of spinning disks. When you’re using a laptop as I do, with most of my data in the cloud somewhere and only a working set kept on the machine, a low-end flash disk is big enough and therefore affordable: 120GB will store Windows and around 50GB of data and applications.

From a company’s point of view, the equation isn’t so different. Clearly, the volumes of data to be stored are bigger but despite the blandishments of those companies selling all-flash storage systems, many companies are not seeing the benefits. That’s according to one storage systems vendor which recently announced the results of an industry survey.

Caveat: industry surveys are almost always skewed because of sample size and/or the types of questions asked, so the results need to be taken with a pinch – maybe more – of salt.

Tegile Systems reckons that 99 percent of SME and enterprise users who are turning to solid-state storage will overpay. They’re buying more than they need, the survey finds – at least according to the press release, which wastes no time in mentioning, in its second paragraph, that the company penning the release just happens to have the solution. So shameless!

Despite that, I think Tegile is onto something. Companies are less sensitive to the price per GB than they are to the price / performance ratio, usually expressed in IOPS, which is where solid-state delivers in spades. It’s much quicker than spinning disks at returning information to the processor, and it’s cheaper to run in terms of its demands on power and cooling.

Where the over-payment bit comes in is this (from the release): “More than 60% of those surveyed reported that these applications need only between 1,000 and 100,000 IOPS. Paying for an array built to deliver 1,000,000 IOPS to service an application that only needs 100,000 IOPS makes no sense when a hybrid array can service the same workload for a fraction of the cost.”

In other words, replacing all your spinning disks with flash buys more performance than you need – a claim justified by the assertion that only a small proportion of the data is being worked on at any one time. So, the logic goes, you store that hot data on flash for good performance, while the rest lives on spinning disks, which are much cheaper to buy. Don’t replace all your disks with flash, just a small proportion, sized to your working data set.
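
The tiering logic can be sketched in a few lines – a deliberately naive policy of my own, not how any particular vendor’s array works:

```python
from collections import Counter

def place_blocks(access_counts, flash_blocks):
    """Toy tiering policy: the hottest blocks (by access count) go on
    flash, up to its capacity; everything else stays on spinning disk.
    Real hybrid arrays re-tier continuously as access patterns shift."""
    hottest = Counter(access_counts).most_common(flash_blocks)
    flash = {block for block, _ in hottest}
    disk = set(access_counts) - flash
    return flash, disk

accesses = {"a": 900, "c": 700, "b": 40, "d": 3}
flash, disk = place_blocks(accesses, flash_blocks=2)
print(sorted(flash), sorted(disk))   # ['a', 'c'] ['b', 'd']
```

With a skewed access pattern like this, a small flash tier absorbs almost all the IOPS while the bulk of the capacity stays on cheap disk – which is exactly the hybrid pitch.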

It’s a so-called hybrid solution. And of course Tegile recommends you buy its tuned-up, all-in-one hybrid arrays, which save you the trouble of building your own.

Tegile is not alone in the field, with Pure Storage having recently launched in Europe. Pure uses ordinary consumer-grade disks, which should make it even cheaper although price comparisons are invariably difficult due to the ‘how long is a piece of string?’ problem.

There are other vendors too but I’ll leave you to find out who they are.

From a consumer point of view, though, where’s the beef? There’s a good chance you’re already using a hybrid system if you have a recent desktop or laptop, as a number of hard disk manufacturers have taken to front-ending their mechanisms with flash to make them feel more responsive.

Hard disks are not going away, as their price per GB is falling just as quickly as flash’s, although the two technologies’ characteristics are different. There will, though, come a time when flash capacities are big enough for ordinary use – just like my laptop – and everyone will get super-fast load times and longer battery life.

Assuming that laptops and desktops survive at all. But that’s another story for another time.

Filed under: Data centre, desktops, Laptop, Storage, Technology

Whom do you trust?

Keeping your data secure is something you need to be constantly aware of. Apart from the army of people out there who actively seek your credit card and other financial and personal details – not to mention the breadcrumbs that accumulate into a substantial loaf of data on social media – it’s all too easy to give the stuff away on your own.

It’s really all about trust. We’re not very good at choosing whom we trust, as we tend to trust people we know – or sometimes simply the people around us. As an example, I present a little scenario I encountered yesterday on a train.

The train divided en route, so to get to your destination you needed to be in the right portion of it. The individual opposite me sat for 45 minutes through seemingly endless announcements – from the guard, from the scrolling dot-matrix screens, and from the irritatingly frequent automated system – all conveying the same information before, during and after each of the three or four stops preceding the point where the train split.

At the station where a decision had to be made, she leaned across and asked if she was in the right portion of the train for her destination.

Why? She would rather trust other passengers than the umpteen announcements. She’s not alone, as I’ve seen this happen countless times.

So it’s all about whom you trust. As passengers, we were trustworthy.

So presumably were the security researchers with clipboards standing at railway stations asking passengers for their company PC’s password in exchange for a cheap biro. They gathered plenty of passwords.

I recently left a USB phone charger in a hotel belonging to a major international chain. They said they would post it back if I sent them a scanned copy of my credit card to cover the postage. That they offered at all suggests there must be plenty of people willing to gamble that their email won’t be read by someone who shouldn’t see it – not to mention what happens after the hotel has finished with the data. Can they be sure the email will be securely deleted?

I declined the offer and suggested that this major chain could afford the £7 it would cost to pop it in the post. Still waiting, but not with bated breath. I don’t trust them.

Filed under: data protection, Security, Technology

2012: the tech year in view (part 2)

Here’s part 2 of my round-up of some of the more interesting news stories that came my way in 2012. Part 1 was published on 28 December 2012.

Datacentre infrastructure
NextIO impressed with its network consolidation product, vNet. This device virtualises the I/O of all the data to and from servers in a rack, so that they can share the bandwidth resource which is allocated according to need. It means that one adapter can look like multiple virtual adapters for sharing between both physical and virtual servers, with each virtual adapter looking like a physical adapter to each server. The main beneficiaries, according to the company, are cloud providers, who can then add more servers quickly and easily without having to physically reconfigure their systems and cables. According to the company, a typical virtualisation host can be integrated into the datacentre in minutes as opposed to hours.

In the same part of the forest, the longer-established Xsigo launched a new management layer for its Data Center Fabric appliances, its connectivity virtualisation products. This allows you to see all I/O traffic across all the servers, any protocol, and with a granularity that ranges from specific ports to entire servers.

Nutanix came up with a twist on virtualisation by cramming all the pieces you need for a virtualisation infrastructure into a single box. The result, says the company, is a converged virtualisation appliance that allows you to build a datacentre with no need for separate storage systems. “Our mission is to make virtualisation simple by eliminating the need for network storage,” reckons the company. Its all-in-one appliances mean faster setup and reduced hardware expenditure, the company claims. However, like any do-it-all device, its desirability depends on how much you value the ability to customise over ease of use and setup. Most tend to prefer separates so they can pick and choose.

Cooling servers is a major problem: it costs money and wastes energy that could be more usefully spent on computing. This is why Iceotope has developed a server that’s entirely enclosed and filled with an inert liquid, 3M Novec 7300, which convects heat away from heat-generating components and is, according to chemical giant 3M, environmentally friendly and thermally stable. The fluid needs no pumping: convection currents transport the heat and dump it into a water-filled radiator. The water is pumped, but Iceotope says you need only a 72W pump for a 20kW cabinet of servers, a far cry from the typical 1:1 ratio of cooling energy to compute power when air is the transmission medium.

Networking
Vello Systems launched its Data Center Gateway incorporating VellOS, an operating system designed for software-defined networking (SDN) – probably the biggest revolution in network technology of the last decade. The box is among the first SDN products – as opposed to vapourware – to emerge, and the OS can manage not just Vello’s own products but other SDN-compliant systems too.

Cloud computing
One of the highlights of my cloud computing year was a visit to Lille, to see one of OVH‘s datacentres. One of France’s biggest cloud providers, OVH is unusual in that it builds everything itself from standard components. You’ll find no HP, IBM or Dell servers here, just bare Supermicro motherboards in open trays, cooled by fresh air. The motivation, says the company, comes from the fact that it has no external investors and a high level of technical and engineering expertise at the top. Effectively, the company does it this way because it has the resources to do so, and “because we are techies and it’s one of our strong values.” The claimed benefit is lower costs for its customers.

I had an interesting discussion with Martino Corbelli, the chief customer officer at Star, a UK-based cloud services provider. He said that the UK’s mid-market firms are getting stuck in bad relationships with cloud services vendors because they lack both the management and negotiation skills required to handle issues and the budget to cover the cost of switching providers.

“The industry for managed services and cloud is full of examples of people who over-promise, under-deliver and don’t meet expectations,” he said, reckoning that discussions with potential customers now revolve more around business issues than technology. “Now it’s about the peer-to-peer relationship,” he said. “Can you trust them, are you on the same wavelength, do you feel that your CFO can call their CFO and talk to them as equals?”

We also saw the launch of new cloud computing providers and services from mid-market specialist Dedipower, CloudBees with a Java-based platform service, and Doyenz with a disaster recovery service aimed at smaller businesses.

Storage
Coraid boasted of attracting over 1,500 customers for its unique ATA-over-Ethernet (AoE) storage products, in which storage traffic runs over native Ethernet rather than storage-specific protocols. Coraid reckons this cuts protocol overhead, making its systems three to five times faster than iSCSI. The company makes a range of storage systems but, although AoE is an open standard, no other company designs and sells products based on it.

WhipTail joined the growing list of vendors selling all-flash storage systems with its Accela products. Solid state gives you huge performance advantages, but the raw storage (as opposed to the surrounding support infrastructure) costs around ten times as much as spinning disk, so the value proposition is that the added performance allows you to make more money.
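
The economics behind that value proposition can be sketched with purely illustrative numbers (not vendor pricing): flash loses badly on cost per gigabyte but wins decisively on cost per IOPS, which is what matters for performance-hungry workloads.

```python
import math

# Purely illustrative figures, not vendor pricing: flash assumed at 10x
# disk's cost per gigabyte, with 1TB devices on both sides for simplicity.
flash_cost_per_gb, flash_iops_per_device = 10.0, 50_000
disk_cost_per_gb, disk_iops_per_device = 1.0, 150
device_gb = 1000

# How many devices, and how much money, to deliver 100,000 IOPS?
target_iops = 100_000
flash_devices = math.ceil(target_iops / flash_iops_per_device)
disk_devices = math.ceil(target_iops / disk_iops_per_device)

flash_cost = flash_devices * device_gb * flash_cost_per_gb
disk_cost = disk_devices * device_gb * disk_cost_per_gb
print(f"Flash: {flash_devices} devices, ${flash_cost:,.0f}")
print(f"Disk:  {disk_devices} devices, ${disk_cost:,.0f}")
```

On these assumed numbers, hitting the IOPS target with spinning disk means buying hundreds of spindles, dwarfing flash’s per-gigabyte premium.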

Eventually, the bulk of storage will be solid-state, as the price comes down, with disk relegated to storing backups, archives and low-priority data, but that time has yet to come. It’s a delicate balancing act for companies such as WhipTail and Violin Memory: they don’t want to be too far ahead of the mass market, nor to miss the boat when flash storage becomes mainstream.

Filed under: Business, Cloud computing, Data centre, Enterprise, Networking, operating systems, Product launch, Storage, Systems management, Technology

2012: the tech year in view (part 1)

As 2012 draws to a close, here’s a round-up of some of the more interesting news stories that came my way this year. This is part 1 of 2 – part 2 will be posted on Monday 31 December 2012.

Storage
Virsto, a company making software that boosts storage performance by sequentialising the random data streams from multiple virtual machines, launched Virsto for vSphere 2.0. According to the company, this adds features for virtual desktop infrastructures (VDI) and can halve the cost of providing storage for each desktop. The technology saves money, says Virsto, because you need less storage to deliver sufficient data throughput.
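
Virsto’s actual design isn’t public here, but the general idea – turning the “I/O blender” of many VMs’ random writes into one sequential stream, with an index to find blocks again on read – can be sketched in a few lines (a toy illustration, not Virsto’s implementation):

```python
# Toy sketch of log-structured write sequentialisation (illustrative only).
# Random writes from many VMs are appended to one sequential log; an index
# maps (vm, block) -> log position so reads can find the latest copy.
class SequentialisingLayer:
    def __init__(self):
        self.log = []        # stands in for the sequential on-disk log
        self.index = {}      # (vm_id, block_no) -> position in the log

    def write(self, vm_id, block_no, data):
        # Every write, however random its logical address, becomes an append.
        self.index[(vm_id, block_no)] = len(self.log)
        self.log.append(data)

    def read(self, vm_id, block_no):
        pos = self.index.get((vm_id, block_no))
        return self.log[pos] if pos is not None else None

layer = SequentialisingLayer()
layer.write("vm1", 900, b"a")   # scattered logical addresses...
layer.write("vm2", 17, b"b")
layer.write("vm1", 3, b"c")
print(layer.log)                # ...land as one contiguous sequence
print(layer.read("vm2", 17))
```

The disk only ever sees sequential appends, which spinning media handle far faster than random writes; the price is the index and the eventual need for garbage collection, which real products handle behind the scenes.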

At the IPExpo show, I spoke with Overland, which has added a block-based product called SnapSAN to its portfolio. According to the company, the SnapSAN 3000 and 5000 offer primary storage using SSD for caching or auto-tiering. This “moves us towards the big enterprise market while remaining simple and cost-effective,” said a spokesman. Also, Overland’s new SnapServer DX series now includes dynamic RAID, which works somewhat like Drobo’s system: you can install differently sized disks into the array and still use all the capacity.
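
To see why mixed-size support matters, compare usable capacity under a conventional RAID set, which typically truncates every member to the size of the smallest disk, with a Drobo-style dynamic layout that reserves the largest disk’s worth of space for redundancy (hypothetical disk sizes, illustrative only):

```python
# Illustrative capacity comparison for a mixed-disk array (hypothetical sizes).
disks_tb = [1, 1, 2, 3]  # four disks of different sizes

# Conventional RAID 5: every member counts as the smallest disk,
# and one member's worth of capacity goes to parity.
conventional = (len(disks_tb) - 1) * min(disks_tb)

# Dynamic-RAID-style layout: total capacity minus the largest
# disk's worth, reserved for redundancy.
dynamic = sum(disks_tb) - max(disks_tb)

print(f"Conventional RAID 5 usable: {conventional} TB")  # 3 TB
print(f"Dynamic layout usable:      {dynamic} TB")       # 4 TB
```

With this mix of disks the dynamic scheme yields a third more usable space, and the gap widens as the size spread between disks grows.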

Storage startup Tegile is one of many companies making storage arrays with both spinning and solid-state disks which, the company claims, boost performance cost-effectively. Tegile says it reduces data aggressively, using de-duplication and compression, to cut the cost of the SSD overhead. Its main competitor is Nimble Storage.

Nimble itself launched a so-called ‘scale to fit’ architecture for its hybrid SSD/spinning-disk arrays this year, adding expansion shelves that allow capacity to grow. It’s a unified approach, says the company, which means that adding storage doesn’t require a lot of admin work moving data around.

Cloud computing
Red Hat launched OpenShift Enterprise, a cloud-based platform-as-a-service (PaaS). This is, says Red Hat, a solution for developers launching new projects, including a development toolkit that lets you quickly fire up new VM instances. The system is based on SELinux: you can fire up a container and get middleware components such as JBoss and PHP, plus a wide variety of languages. The benefit, says the company, is that the system allows you to pool resources across your development projects.

Red Hat also launched Enterprise Virtualization 3.1, a platform for hosting virtual servers with up to 160 logical CPUs and up to 2TB of memory per virtual machine. It adds command-line tools for administrators, plus features such as RESTful APIs, a new Python-based software development kit, and a bash shell. The open-source system includes a GUI that lets you manage hundreds of hosts with thousands of VMs, according to Red Hat.

HP spoke to me at IPExpo about a new CGI rendering system that it’s offering as a cloud-based service. According to HP’s Bristol labs director, it’s 100 percent automated and autonomic: a graphics designer uses a framework to send a CGI job to a service provider, which renders the film frames. The service estimates the number of servers required, sets them up and configures them automatically in just two minutes, then tears them down after the video frames are delivered. Evidence that it works can apparently be seen in the animated film Madagascar where, to make the lion’s mane move realistically, calculations were needed for 50,000 individual hairs.

For the future, HP Labs is looking at using big data and analytics for security purposes and is looking at providing an app store for analytics as a service.

Security
I also spoke with Rapid7, an open-source security company that offers a range of tools for companies large and small to control and manage the security of their digital assets. Its portfolio includes a vulnerability scanner, Nexpose; a penetration testing tool, Metasploit; and Mobilisafe, a tool for mobile devices that “discovers, identifies and eliminates risks to company data from mobile devices”, according to the company. Overall, the company aims to provide “solutions for comprehensive security assessments that enable smart decisions and the ability to act effectively”, a tall order in a crowded security market.

I caught up with Druva, a company that develops software to protect mobile devices such as smartphones, laptops and tablets. Given the explosive growth in the number of end-user-owned devices in companies today, this company has found itself in the right place at the right time. New features added to its flagship product inSync include better usability and reporting, with the aim of giving IT admins a clearer idea of what users are doing with their devices on the company network.

Networking
Enterasys – once Cabletron, for the oldies around here – launched a new wireless system, IdentiFi. The company calls it wireless with embedded intelligence, offering wired-like performance with added security. The system can identify performance and identity issues and locate users, the company says, and it integrates with Enterasys’ OneFabric network architecture, which is managed from a single database.

Management
The growth of virtualisation in datacentres has created a need to manage the virtual machines, and a number of companies focusing on this problem have sprung up. Among them is vKernel, whose vOPS Server product aims to be an easy-to-use tool for admins; experts should feel they have another pair of hands to help them do stuff, was how one company spokesman put it. The company, now owned by Dell, claims the largest feature set in virtualisation management when you include its vKernel and vFoglight products, which provide analysis, advice and automation of common tasks.

Filed under: Business, Cloud computing, data protection, Enterprise, mobile, Networking, Product, Product launch, Security, Servers, Storage, Systems management, Technology
