Looking For A Show? Preamp.fm Builds A Music Video Playlist Of Bands Playing Nearby


Oh, my. I think someone has actually done it.

Someone has built a system that promises to help me find fun things to do near me that… actually makes me want to do fun things near me.

Like many a concept before it, Preamp.fm is so simple, yet so clever, that it drives me crazy that I didn’t think of it first. Preamp.fm finds a bunch of artists playing shows in venues near you, then builds an old-school MTV-esque video playlist with one video for each band in hopes that you’ll like one enough to go see the show.

Simple, right? But it works so well.

Maybe I’m lame. Maybe I’m just not very good at this whole concert-going thing. Whatever the case, I tend not to go see shows unless I hear that a band I already love is coming to town. Of the half-dozen or so concerts I go to each year, almost all of them are bands I’ve already been listening to for the better part of a decade. Most of them are bands I’ve seen a bunch of times already. If variety is the spice of life, my concert-hopping life is… pretty friggin’ bland.

In the first thirty minutes or so of playing with Preamp.fm, I’ve already found 3 new bands I want to go see.

Here’s how Preamp.fm works: You pick your city, and it starts playing videos from bands playing soon. That’s it. You can skip to the next band, bring up a big list of all the shows they’ve found happening over the next two weeks, or just sit back and let ’em roll. In the bottom left of each video is a “Get Tickets” button that pushes you to the venue’s purchasing system.

And therein lies their business model: they’ll collect a commission on tickets, whenever a venue allows it. If they can get you to buy a ticket, they might make a few bucks off the sale. Meanwhile, all of the videos — the heaviest part of the site, traffic-wise — are being piped in from YouTube, so Preamp.fm’s hosting costs are presumably pretty dang low. Users find new bands, the bands get more fans and exposure, the venues make more sales, and Preamp makes money for bringing them all together. Everyone wins! Hurray!

Alas, Preamp.fm isn’t a nationwide thing yet. In fact, it only works in four major cities right now: Los Angeles, New York City, Washington D.C., and San Francisco. Every city has different venues, and every venue shares their upcoming shows differently, so building its coverage out quickly might prove to be something of a challenge. Preamp.fm is hoping to find a “global crew of local music experts” to help them curate things.

Is it perfect? Nah — of course not — but remember, it just launched, built by a small, bootstrapped team. It’s not going to have every venue right off the bat, even in the cities that it supports. It also doesn’t seem to tell you when a show is already sold out — something which might get a bit annoying, if you’re consistently finding bands you like only to be unable to get tickets. But this concept is excellent, and the execution so far is quite solid; if they can figure out how to scale it up, I see myself using it regularly.

Preamp.fm was built by Charles Worthington, a freelance developer/product designer out of Washington, DC. You can check out Preamp.fm here.

The Weekly Roundup for 06.24.2013


You might say the week is never really done in consumer technology news. Your workweek, however, hopefully draws to a close at some point. This is the Weekly Roundup on Engadget, a quick peek back at the top headlines for the past seven days — all handpicked by the editors here at the site. Click on through the break, and enjoy.


Bloomberg: Nokia will buy Siemens' share of joint venture for less than $2.6b

Not all partnerships pan out, and Nokia seems ready to call it quits: according to Bloomberg, the company might announce a buyout of the German half of Nokia Siemens Networks later this week. Sources familiar with the matter say that the Finnish firm is planning to use a bridge loan to finance the $2.6 billion purchase (less than 2 billion euros), taking the entire operation under its own wing. It’s not a completely unexpected move on Nokia’s part — the company previously avoided selling off its stake in the network back in 2011, opting to lean on its own shareholders instead. Bloomberg reports that Siemens has declined to comment on the issue, but we’ll let you know if we hear anything solid.



Source: Bloomberg

Paul Graham's Prescription For VCs: Move Fast, Take Less Equity


At the 500 Startups’ PreMoney Conference last week, Y Combinator’s Paul Graham gave a presentation in which he suggested a new way for Series A investments to get done. Graham provided a few suggestions for how innovative early-stage investors can differentiate themselves. It basically comes down to: move fast, and don’t over-invest in startups just to get a certain percentage of equity.

“One of the biggest things investors do not get about the fundraising process is what an immense cost talking to them imposes on the startups that are raising money, especially when a startup consists of just the founders. Everything completely grinds to a halt during fundraising,” Graham said at the conference.

Graham suggested that as a result, there’s room for an investor to undercut the competition by moving more quickly on early-stage investments. If there existed a reputable investor who would invest $100,000 on market terms within 24 hours, they would be able to corner the market on the best startups, he said. That firm would be approached by all the worst startups as well, Graham said, but at least they’d see everything. In contrast, firms that have a reputation for taking a long time to make their investments would be approached last.

Another way venture firms could differentiate themselves is by breaking from the typical 20 percent equity stake they ask for in Series A investments. VCs are investing too much, and startups are raising too much, during that fundraising period; that could change if someone were willing to break ranks and actually invest less money for less equity.

“I think the biggest danger for VCs, and also the biggest opportunity is in the Series A stage,” Graham said. “Right now, VCs knowingly invest too much in the Series A stage.”

When there’s a lot of competition for deals, the number that moves isn’t the amount of equity that VCs take, but the amount that they invest and thus the valuation of the company, Graham said. In the case of the most promising startups, Series A investors force companies to take more money than they want to raise.
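A quick worked example, with made-up numbers, shows why the check size rather than the percentage is what moves: if the equity stake is pinned at 20 percent, a bigger check mechanically means a higher post-money valuation.

```python
# Hypothetical numbers: with a fixed 20% equity target, competition
# raises the check size (and therefore the implied valuation), not the stake.
equity = 0.20
for check in (4e6, 6e6, 8e6):        # competing offers, in dollars
    post_money = check / equity      # valuation implied by the check
    print(f"${check / 1e6:.0f}M for 20% -> ${post_money / 1e6:.0f}M post-money")
```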

“Some VCs lie and say that the company needs that much,” he said. “Others are more candid and admit that their business models require them to own a certain percentage of the company, but we all know that the amounts being invested are not determined by the amount that the companies need.”

It used to be that startups needed to give up that much of their company to raise money, but those days are over. With that in mind, Graham thinks that the first VC who breaks ranks and starts doing Series A investments for the amount of equity that the founder is willing to sell stands to reap huge benefits.

“If there were a reputable top tier firm that was willing to do a Series A round for as much stock as the founders wanted to sell, they would instantly get almost all the best startups,” Graham said. “And the best startups are where the money is.”

I talked to him about that theory and about how Y Combinator has scaled in the video above. (Skip ahead to about 5:30 to hear his thoughts on changing equity structures.)

Developers Are Lifting The Cloud, Not The Other Way Around


For all the attention the cloud got this week, it’s evident that the talk itself is pretty much a distraction from what is really happening. Developers are lifting the cloud, not the other way around.

The big guns of tech are aligning because they have to. It’s a defensive move to serve their existing customer base. It’s not like the old kings are showing substantial revenue increases for new software licenses. But consolidating power to offer legacy technology does show that the cloud is anything you want to call it.

In their new definition of the cloud, the IT-heavy enterprise gets a new version of that old-school database to run the software it installed 10 or 15 years ago. An operating system built for the desktop and client/server age can be recast as a cloud service. Older SaaS companies can work with former on-premises foes and happily proclaim that what worked for the past 14 years will be just fine for another two generations or more. Just like cowbell, there is never enough.

But these moves to align CRM and operating systems with legacy databases are not about innovation. They are simply meant to keep the status quo and offer the bread and butter business that have earned them billions in revenue.

The real innovation is in the new genre of databases, developer frameworks, social coding services and the APIs enriched with context through data analysis.

That’s not to say the cloud lacks value. It has plenty of that. The cloud is really all about value. Prices continue to drop for compute and storage. On Joyent, a developer can now pay by the second.

But look deep into the infrastructure and there are signs even there of the developer’s work. Hearing more about this idea of the software-defined data center? It’s the concept that software, not metal switches, does the work, with APIs connecting it all together. The APIs connect networks, data stores, all forms of clients, databases and so on. It’s the act of the network going to the app instead of the other way around.
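As a toy illustration of that idea (everything below is hypothetical, and real software-defined networking stacks are far richer), the “network” becomes a data structure manipulated through an API rather than a rack of hand-configured switches:

```python
# A toy "software-defined" network: connectivity is just data plus an
# API call, not a physical switch. Purely illustrative.
class VirtualNetwork:
    def __init__(self):
        self.links = {}  # app name -> set of backends it may reach

    def connect(self, app, backend):
        """An API call, not a cable: wire an app to a data store."""
        self.links.setdefault(app, set()).add(backend)

    def can_route(self, app, backend):
        return backend in self.links.get(app, set())

net = VirtualNetwork()
net.connect("mobile-client", "user-db")           # the network goes to the app
print(net.can_route("mobile-client", "user-db"))  # True, no switch touched
```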

So all the machines and the pipes are getting abstracted and the developer, arguably, is driving that change. The smartphone is a server. As again illustrated by Joyent with Project Manta, the big storage and network machines are now becoming part of the operating system. Compute and storage are coming together and in-memory databases make for split-second analytics.

Just.me founder Keith Teare (who is also one of the original TechCrunch founders) said on the Gillmor Gang this week that the cloud is a constant, but it does not constantly do the same things. The place to look for change is in how it is used. The real shift is in how the cloud is consumed. Some of it is apps, some of it is devices; some data is pushed and some of it is pulled. The cloud is a data integrator (Message Bus) and a data store, but not necessarily meant to be consumed just through a browser.

Andreessen Horowitz partner Peter Levine said in an interview this past week that 15 to 20 years ago it was all Microsoft with the WinAPI. Every program and every API call was done to Windows. Now we see the entire disaggregation of the API. Companies now expect APIs. All of that has helped accelerate development.

It is the year of the developer, and that is evident with GitHub, which now has about 10,000 subscribers signing up every day, Levine said.

For the past few years, the developer crescendo has lifted the cloud. The cloud players are interested in what the developers are doing. As Levine said, developers make it possible for us to have functioning super computers in our hands.

And so take another look when the news hits about more big legacy players happily talking about the greatness of the cloud. Sure, it’s awesome, but it would be meaningless if it were not for all those developers using it to make the cool things we use every day on the supercomputers we keep in our pockets to connect us to the world.

The Plasticky BlackBerry Q5 Is Not The Mid-Tier Hero Handset BB10 Needs To Save It


In some ways the Qwerty-packing Q5, with its throwback BlackBerry looks, is a far more important device for BlackBerry than its current flagship, the all-touch Z10. Or the premium-priced Qwerty-clad Q10. The mid-tier Q5 should be priced to shift — because that’s what BlackBerry needs to happen to start regaining the ground it lost when it was forced to pause and reboot its OS to play catch-up with rivals. That’s what the Q5 should do, but will it?

The problem for BlackBerry is it may already be too late to turn things around. BlackBerry’s latest results, out late last week, made grim reading as the company missed analyst expectations, and its share price took a battering. It shipped just 2.7 million BlackBerry 10 handsets in its Q1. But it has only had two BB10 devices to sell, one of which (the Q10) only made it to market in the U.S. earlier this month. Which makes the Q5 even more important: BlackBerry needs more handsets in its portfolio attacking different price points to have a chance of ramping up sales.

The problem is the Q5 doesn’t feel like a saviour. It feels closer to a kludge. Likely it isn’t going to be cheap enough to really hit Android where it hurts (it’s mid-tier, not budget, after all). Nor does it feel like enough of a leap forward to convert a new generation of users to BlackBerry. BB10 is still BlackBerry playing catch-up with competitors, rather than streaking ahead in the innovation stakes.

Of course many BlackBerry loyalists and long-time users aren’t going to be unhappy with the Q5’s old-school Qwerty form factor. But that staid staple means it necessarily offers a crimped OS experience versus the full-touch Z10. On the Q5 — as with the Q10 — the touchscreen has had to be squashed into a square to accommodate yesteryear’s physical Qwerty keys. Which is a problem, because BlackBerry’s new platform needs room for the user to manoeuvre.

BB10 is built around gestures and layering content — and that whole “peek and flow” dynamic comes into its own on a full touchscreen. But on the Q5’s small square it’s inevitably constrained. Yet, despite this squeezed screen, the Q5 is surprisingly big for a Qwerty BlackBerry — certainly compared to past generations of RIM hardware, those ever-so-popular Curves and Bolds it apparently succeeds.

As well as being constrained by having to make room for the keyboard, space has to be found to accommodate the bezels where BB10’s gestures have to start. This makes the overall front footprint a bit, well, hefty. It looks like an oversized, top-heavy BlackBerry, which will feel like a step backwards to those accustomed to BlackBerry’s traditionally highly pocketable handsets. And who else is this Qwerty-packer really trying to woo?

Android users have so much choice when it comes to keyboard software that even if they don’t get on with the stock Android virtual keyboard they can switch to Swype, or SwiftKey, or any one of the growing number of Qwerty alternatives cooking up interesting new ways to type. The Q5’s immutable plastic keys feel terribly dumb-phone in comparison.

Even the BlackBerry exec demoing the Q5 at the press event I attended to pick up a review device described the physical keyboard as “infamous” (Freudian slip?), and said he found typing on it “a bit strange” because “I’m used to typing on the [full touchscreen] Z10.” That says it all, really.

The Q5 is a deeply conservative device. It continues to look backwards to BlackBerry’s legacy keyboard-chained past — a compromise between old technology and new software. And like most compromises, it’s unlikely to entirely please anybody. It’s not that it’s terrible, it just doesn’t feel good enough to make an impact — and that means it’s not good enough because BlackBerry needs something remarkable to stand out in this crowded mid-tier segment.

Yet you can see exactly how and why BlackBerry has arrived here. In its current shrunken state, as its user base and revenues have diminished, the company has had to retrench. It can’t afford to lose any more users, yet it can’t afford to ramp up the number of devices in its portfolio quickly enough — making it super important that it retains its one remaining heartland: corporate users. Those are the last really sticky BlackBerry users, even as fickle consumers have wandered off elsewhere.

So BlackBerry can’t cut its ties with the past as it’s now even more dependent on its most conservative demographic. Its focus has to be on servicing that existing corporate user-base — because their loyalty is locked up far more than the average consumer’s. Some 90 percent of the Fortune 500 are BlackBerry customers, according to the company. And some 60 percent are apparently trialling BB10. BlackBerry needs those bulk-buyers to migrate to BB10 and continue pumping money into its coffers. If they abandon ship, BlackBerry really will be an adrift ghost ship.

Selling mobile email to corporates is how BlackBerry built its original mobile empire. And selling to corporates is where BlackBerry has had to retrench to now. An army of cheap Androids is sweeping away its other former stronghold: teens. Meanwhile, free over-the-top messaging apps like WhatsApp have eroded the appeal of BBM (BlackBerry’s licensing of BBM to Android and iOS this summer also feels like too little, too late). Now, with the mid-tier-priced Q5, BlackBerry is apparently hoping to woo those kids back. But the Q5 compromises on target demographic, too.

On the one hand BlackBerry says it’s aiming the Q5 at younger users. But it also cites SMEs and corporates as targets — flagging up the Balance feature that allows segmentation of work and personal content on the device. Little wonder then that, design-wise, the Q5 looks like it’s trying not to be too much of anything, so no one feels like disowning it. If I had to use one word to describe it, it would be generic. Or plasticky. It’s as if it’s been deliberately left as blank as possible to be as inoffensive as possible — to try to appeal to as wide a group as possible. In other words: another compromise.

The other problem BlackBerry has is that corporates are famously conservative about technology upgrades, which explains why it has no plans to sunset BB 7 any time soon. Corporate investments in BES 7 “have to be protected,” as one BlackBerry spokesman put it. Which means the company has to keep supporting BB 7 and producing devices running that last-gen OS for the foreseeable future. Which stymies change, and hampers BB10’s progress, as a portion of resources has to go on the old platform.

Plus, if BB 7 devices are still on offer, why should corporates risk the upheaval of upgrading to a device like the Q5? They’ll stick with what they know, and leave this compromise on the shelf. So while BlackBerry youth users are going — or have already gone — elsewhere to check out shinier hardware, its business users are foot-dragging and in no hurry to move on. Talk about being stuck between a rock and a hard place. No wonder turning this tanker is so hard.

Pricing will of course play a key role in whether the Q5 sits on shelves or not. The mid-tier is where the largest Android army roams. But carrier tariffs for the Q5 are going to need to be a lot lower than the early EE pricing of £26 a month to be competitive enough to win over consumers. That price is pitting the Q5 against iPhone 4S or Galaxy S3 tariff prices. Which makes BlackBerry’s mid-tier offering a tough sell, whichever tech camp you prefer to sit in.

Regardless of whether this middling handset ends up selling well or not, it may make little material difference to BlackBerry’s prospects. The perception that the mobile maker is now locked in a death spiral will only increase shareholder pressure on the management team, and make acquisition a more likely end. BlackBerry would need to sell an awful lot of Q5s to calm that spin.

iOS 7 FaceTime icon revealed in trademark filing

Apple’s sweeping change of iconography in iOS 7 has prompted no shortage of discussion over the upcoming iPhone and iPad interface change, and now the new FaceTime icon has been revealed too. The new icon, shown off courtesy of the US Patent and Trademark Office, sticks closely to the design theme we’ve already seen Apple adopt throughout iOS 7.


Switched On: Form in the USA

Each week Ross Rubin contributes Switched On, a column about consumer technology.


The Mac Pro might have been worthy of the “One More Thing” kinds of reveals that Steve Jobs used to do at Apple events. Despite being foreshadowed by Tim Cook as a product the company was going to make in the US, it was virtually carted in from left field at an event that focused broadly on new operating systems before a crowd of developers that could appreciate its power. That said, it will likely require OS X Mavericks, a thematically fitting release for a product that represents a new wave in Apple’s design.

Some have said that iOS 7 may be the company’s New Coke. The Mac Pro, though, is the new can. Its cylindrical form represents a new design for Apple, albeit one that jibes with the company’s affinity for simple, rounded, iconic shapes. Like the new AirPort Extreme, it has a significant vertical profile, but it is a fraction of the size of its predecessor, which was designed to accommodate multiple optical drives and hard drives.



Hate Comic Sans? Blame this Microsoft virtual assistant

The Windows 95 “Microsoft Bob” interface. Twenty years ago, not only did we write birthday letters, but they were so easy, a cartoon dog could tell us how to do it. (Image credit: TopWindowsTutorials)

It’s hard to disagree that virtual assistants have wrought little good in this world. From Clippy’s benignly stupid questions to BonziBuddy’s spyware and evil homepage resets, it’s a wonder that Siri or Google Now have been able to reclaim any goodwill. But it turns out virtual assistants have a more lasting negative impact than general annoyance: the font Comic Sans.

Comic Sans lore says that Vincent Connare, a designer on a Microsoft consumer software team, saw a working prototype of the assistant Microsoft Bob back in 1994. This featured a cartoon dog named Rover speaking in text bubbles and, incongruously for a cartoon, Rover spoke in Times New Roman.

In what can now only be described as dramatic irony, Connare was shocked and appalled at how ugly Times New Roman looked coming, ostensibly, out of a cartoon’s mouth. He decided that a more comic-like typeface was needed.


What Games Are: The Ludophile Mindset


Editor’s note: Tadhg Kelly is a veteran game designer, creator of leading game design blog What Games Are and creative director of Jawfish Games. You can follow him on Twitter here.

There are regular people who like music on their phones and maybe listen to the radio or Spotify. For them, music is just a backing track to daily life. Then there are interested people, who like good headphones, pay for a Spotify subscription or attend the occasional concert. Beyond these two groups lie the passionate 5%, the audiophiles. They follow bands, hunt down vinyl records, read blogs or magazines about music and spend serious coin on their habit.

In the games market the picture is surprisingly similar. You have the mildly interested who play free games on their phones and social networks and the moderately interested who buy one gaming machine and a couple of games over a few years. Then you have the self-described gamers, the ludophiles (ludo- as a general term means play). They follow franchises, hunt down retro cartridges, read blogs or magazines about games and spend serious coin on their habit.

What sets them apart is just as interesting. Music fans have no issue with the status of their medium. Music is art, music is cool, music is culture. Gaming fans, on the other hand, have lots of inferiority issues. A key dynamic that keeps recycling is the idea of the game that proves games are as good as movies. Last year that was The Walking Dead; so far this year it’s The Last of Us.

Another difference is the relationship with technology itself. Audiophiles are very much into both the sound and the experience of their music. Formats like vinyl endure because audiophiles believe they sound better (whether they actually do is a question for the ages, but the point is that they believe it). Album art, memorabilia and visibility of collection are also important. History matters. So does the ability to pull out a greatest hit from 40 years ago and play it. Interoperability is key.

Gaming fans, on the other hand, tend to spend their money on the latest technology and forget interoperability. Backwards compatibility is generally not a strong motivator, with the most-dedicated preferring instead to own dozens of gaming machines and play the games of a certain year on the machines for which they were intended. A bit like if Universal Music Group didn’t just develop talent and publish music, but also made devices that could only play UMG music. Ludophiles are relatively comfortable doing this.

The really interesting thing about the ludophile mindset is the ways in which it doesn’t want games to change. Ludophiles do not like mobile games. They feel pretty ambivalent about tablet games. They regard that kind of future somewhat askance and prefer to cheer for stasis rather than progress. This is because they buy into the story of the medium itself. Games as a hobby are as much about participating in the story of the medium as they are about having fun and playing games. To the true believer, games are on an evolutionary path toward somewhere, a final destination of infinite perfection, and this is to be promoted and defended at all costs.

One of my favorite teaser trailers is the one for Halo 3 from 2006. It shows the Master Chief emerging onto a desert scene. He looks to his left and the shot pans out to reveal huge starships moving over to a basin. There is an ominous rumble, cracks appear in the landscape and megalithic doors rise up to reveal a bright light. The light reaches upwards in an awe-inspiring sight before the scene blanks out and we hear the line “This is the way the world ends.”

If you wanted to find a metaphor for how the ludophile views the threat against games, it’s a bit like this. It’s high dudgeon and drama and shouts of glee when changes are averted. The ludophile wants the amazing, the epic and the awesome. The Citizen Kane of gaming. But he also wants a console to be a console, a device with a type of controller that feels like it’s heading toward perfection. He’s not enamored of divergences from the path.

While the world loved the Nintendo Wii, plenty of console gamers always got hung up on its lack of HD. The news that OUYA sold out, that Towerfall is actually rather good, that Google may be working on a microconsole and that GamePop has released a free machine tends to fall on deaf ears. Where many observers like myself think this interoperable, Android-driven approach will yield big returns for the game console in the long run, today’s console gamer is a bit nonplussed. He gets hung up on the fact that Android is for phones, and phone games are “weak” for some reason. The microconsole doesn’t fit the ideal of aiming for perfection. Like the netbook or the tablet, it seems like a big step backward.

And that would all be fine if it seemed that the console sector was a viable market. But I don’t believe it is in the long term. I think it’s merely in a new-hardware exuberant phase, but its overall prognosis is starting to look awfully like the market for record players. There will always be loyalists, but the question becomes whether there are enough of them to really move the needle.

Ludophiles are a fixed audience with fixed ideas of what the future should be. The perfection they aspire toward feels just out of reach, but it always seems to get closer. The Last of Us, for example, really is breathtaking. But that kind of game is also enormously expensive to develop, and is now at the point where it needs to sell 4 or 5 million copies to prove its viability. The technology required to power these advances is also incredibly expensive, so much so that neither Microsoft nor Sony makes any money from being in the games business. Nintendo does (well, bar lately), but Nintendo makes all its own content too, so there’s greater scope for margin there.

With the next generation bringing yet more increases on the development-spending side (historically, this is what tends to happen), the risks inevitably go up. So do closures, high-profile failures and a reduction in diversity. There are not enough ludophiles out there to satisfy that kind of price tag indefinitely, and so decline is inevitable. However, decline will not happen dramatically, all at once.

“This is the way the world ends” is taken from T.S. Eliot’s The Hollow Men. The line that follows it is “Not with a bang but a whimper.” Not with the big supernova-esque explosion of ultimate victory and defeat, but with a slow decay. The death by a thousand cuts from phones, tablets, microconsoles, PCs, social networks and wearables. And what’s fuelling that is an end to the power advantage and a large leap forward in convenience.

The other 95%, the regular and the interested, always tend to gravitate toward good-enough rather than perfection, convenience over fidelity. They bought into DVD because it seemed much better than video, but not Blu-ray; they plumped for digital streaming instead. They bought into HDTV, but not 3DTV, and likely have very little interest in 4KTV. They bought into iPads because they’re much simpler to use than PCs, and don’t care about some of the potential they’ve given up. They prefer to subscribe to Spotify, because who has the room to devote acres of wall space to albums? Or books?

And the same is true for games. Tablet gaming may not adhere to the lofty goals of the quest for perfection, but it’s cheap and super convenient. Microconsoles may be powered by relatively underpowered processors, but this is only really their first year. Two or three years down the road, they’ll pack enough punch, and get their messaging right enough, that they’ll start to sound irresistible to everyone bar ludophiles. What happens when the world realizes that it can play good-quality games, complete with game controllers, on its tablets? What happens when we can play games, whether casual, deep or anything in between, on our phones and simply stream them to our TVs?

Not with a bang…

FCC approves Google's white space wireless database


Google may have been on pins and needles while the FCC scrutinized its white space wireless database over the spring, but it can relax this summer — the FCC has given the database the all-clear. The approval lets Google serve as one of ten go-to sources for white space devices needing safe frequencies in the US. It also lets those with interference-prone devices, such as wireless microphone users, register the airwaves they consider off-limits to white space technology. The clearance won’t have much immediate effect when very few Americans are using the spectrum, but it’s a step forward for rural broadband rollouts and other situations where long-range, unlicensed wireless comes in handy.



Via: SlashGear

Source: FCC, Google

If worthy, Google will lend you its 42lb, 15-camera backpack for an adventure

If you’re an off-the-grid backpacker who dreams of sharing remote destinations with the world, this is your lucky day. This week, Google released a video and an application asking for individuals who can help further its Google Maps coverage:

“If you’re a tourism board, non-profit, university, research organization, or other third party who can gain access and help collect imagery of hard to reach places, you can apply to borrow the Trekker and help map the world.”

The Trekker is one of several pieces of equipment Google officially utilizes for Street View imagery (others include the well-known cars and things like trikes, trolleys, and snowmobiles). This piece of equipment was announced in July 2012 but then showcased to the world through Grand Canyon imagery in January. The Trekker is, at its core, a 42-pound backpack that allows a wearer to access areas where bulkier camera rigs can’t navigate. Google describes it with a little more detail:


Why Facebook Needs Trending Links


Facebook is not working on an RSS product, we hear, but it still has a huge and truly social opportunity in news discovery. Facebook could turn what links we share with friends into an automatic Digg for the world. Over a billion people are on Facebook, and many share links to news stories and offsite content along with their commentary. Yet rather than post publicly like on Twitter, most posts are shared semi-privately with friends and acquaintances. Right now there’s no way for people to glean the collective opinion of Facebook users on what’s important. Only Facebook’s algorithms see what the most popular links and words are across the entire social network. If Facebook took data on what people shared and used it in a privacy-safe, anonymous, aggregate form, it could create a list of the world’s most popular web pages at any given moment. Conveniently linked to from the Facebook home page and mobile app, the list could become an informative and addictive window into our collective consciousness.

Coding On The Shoulders Of Giants

Reddit

There are places to get a peek into what the world is sharing or interested in today, but none with Facebook’s data set or mainstream user base. Reddit is amazing. It’s a wildly diverse community of people picking the day’s most important content across a near-limitless array of categories. Their votes surface what’s most interesting, and their voices are arranged into intelligible threads and conversations. Its threaded design is so good, in fact, that I think we’ll see other less-formatted comment systems move towards Reddit’s style with time.

You could argue whether it’s an advantage or disadvantage, but Reddit is based on active submissions. For something to appear on Reddit, someone must have the initiative and take the time to purposefully post it. Once there, it’s only the Redditors who vote and comment that determine a post’s rank. That makes what tops Reddit’s homepage more of a reflection of the Reddit community than the web as a whole. Sure, there are subreddits for everyone, but as a whole, Reddit carries a bit of a proudly nerdy attitude mixed with doses of skepticism and humor. Facebook’s opportunity comes from the potential to scan everything shared on it and use a wider, more mainstream definition of popularity to rank a list of what’s interesting. No one would have to actively vet the list. It would simply evolve organically based on how frequently things were shared on Facebook, and maybe how many clicks, likes, and comments they received. Offering lists by country or international region could make sure the content is somewhat localized.
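To make that concrete, here is a minimal sketch of the kind of aggregation being described, with entirely hypothetical events, field names and scoring weights; nothing about Facebook’s real pipeline is public:

```python
from collections import defaultdict

# Hypothetical, anonymized share events: (link URL, region, clicks, likes, comments).
# No identities involved, only aggregate counts per link.
share_events = [
    ("http://example.com/a", "US", 3, 5, 1),
    ("http://example.com/b", "US", 1, 0, 0),
    ("http://example.com/a", "EU", 2, 2, 4),
    ("http://example.com/c", "EU", 8, 9, 2),
    ("http://example.com/a", "US", 4, 1, 0),
]

def trending(events, region=None, weights=(1.0, 0.5, 0.3, 0.8), top_n=10):
    """Score each link by weighted shares, clicks, likes and comments,
    optionally restricted to one region, and return the top links."""
    w_share, w_click, w_like, w_comment = weights
    scores = defaultdict(float)
    for url, reg, clicks, likes, comments in events:
        if region is not None and reg != region:
            continue
        scores[url] += w_share + w_click * clicks + w_like * likes + w_comment * comments
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(trending(share_events, region="US"))  # a region-localized trending list
```

A real system would presumably add time decay so yesterday’s links fall off the list, but the basic ranking idea is the same.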

Twitter Trending Topics

The Facebook news discovery experience I’m imagining shares some similarities with Twitter’s Trending Topics. It too doesn’t have to be actively vetted by users. People just go about their days tweeting, and popular words and hashtags bubble to the top of the list. But do you find yourself addicted to checking Twitter’s trending topics? No. At least I sure don’t. They can be briefly shocking or amusing but they rarely teach me much or spur me to click.

Twitter Trends don’t even have their own web page. They’re just stuck on the left rail of Twitter’s home page. On mobile they’re lumped into the Discover tab. In what I see as their critical shortcoming, they have no context. No way to understand why they’re being shared. Clicking them simply opens a search for that word or hashtag, which can produce results that are a mess, tough to decipher, and don’t provide any definitive answer to what the trend is about. For obvious things like sporting events and huge international news, these streams can offer a fascinating insight into what the world is thinking. But even a Google search couldn’t quickly tell me that #FOTunis referred to the Freedom Online conference in Tunis, the capital of Tunisia. Organizing news discovery around individual words or short phrases doesn’t seem very efficient or easy… at least not with this design or without context.

If Facebook centered a news discovery product around links, it could make it much clearer what people are discussing. Links typically come with some combination of a headline, a photo, and some text that can be used as a blurb. All Facebook would need to do is show a list of links with this info, just like it does when you post websites to the news feed, and people could get the pulse of the planet in a quick skim.

While we’re on the topic of Facebook hashtags, signs indicate the company will eventually create a list of trending hashtags. Facebook launched hashtags, similar to Twitter’s, earlier this month, and on Thursday launched Related Hashtags, which displays other tags frequently added to the same post as a hashtag you’ve searched for or clicked on. I believe Facebook is rolling the hashtags product out slowly so it can learn to slice and dice the data in order to create a trending hashtags product.

FaceDigg

Let’s be clear. A Facebook news reading product wouldn’t replace Reddit or Twitter, or necessarily even compete with them directly. But it could take the theme of surfacing what people care about, make it less subjective, and house it in an easy-to-use and accessible design. I personally think I would visit this “Facebook Trends” page frequently. Whenever I read through my news feed and started getting bored, I could click to it for inspiration. I’d skim through it, clicking through to different links and then going back to the Trends page for more. If it had lists based on geography, or a personalized list that tuned itself to my behavior, interests, and what people similar to me enjoy, I might visit even more.

From Digg to Reddit to 9Gag to Techmeme, great lists of trending content have proven addictive. Yet there hasn’t been one with a truly mainstream focus. If Facebook nailed this, it could generate a ton of traffic. I think some people would click to refresh it and see what’s happening in the world often — almost as often as they read the news feed for content from their friends. The two could be seen as parallel pillars of information — that which is interesting specifically to you, and that which is interesting to everyone. Private and public. Subjective and objective.

A Facebook trending links section could also spark high-quality conversations within Facebook. If it shows me something that resonates with me, I might not just click, but share and talk about it with my friends. Ideally, if friends had already shared it, I’d see that and the conversations that followed in-line on the trends list. Facebook already has a nifty way of doing this in the most recent design of the news feed. It shows a stack of profile pictures next to a shared link, and you can hover over each to see how that friend described the content and what their friends replied. Using that design for Trending Links my friends had already shared could be a great alternative to one long, messy comment thread of strangers.

If you’re thinking “I don’t need this. My friends already share great links and clue me in to what’s happening in the world,” you’re lucky, and you’re probably in the minority. Remember that the average user had around 180 to 250 friends, last I heard. I worry that great swaths of Facebook’s user base, especially in emerging markets and countries where the service bloomed later, are missing out on one of the great joys of the social web — the instant, collective conversation surrounding the day’s news, tragedies, and triumphs. It would just take one person perusing Facebook Trends to enlighten an entire social cluster. Since there aren’t real character limits on posts, and comment threads are clearly displayed, people would have plenty of room to voice dissenting opinions about the world’s most popular links. In that way, Facebook’s format and the way it diverges from Twitter could keep it from becoming an echo chamber. In fact, the aggregated “5 friends shared this link” design makes it quick to view a variety of perspectives on a piece of content.

With any discovery medium comes opportunities to monetize through sponsored placement. Brands could pay to have their links inserted within the list of trending links. This could become a premier channel for content marketing. Traditional ads might not work there, but links to branded content or apps, fun marketing stunts, or contests could do well when not jammed into the news feed where they don’t quite fit with organic content from friends. Top-tier advertisers have been pushing Facebook for ways to reach large audiences all at once, and this could be the ticket.

If Facebook wants to house our digital lives, it can’t just be about who we are and what we’ve done. It must also encompass what we think, and to get us to volunteer our thoughts, it should strive to inform us, inspire us, and seed our discussions with friends by surfacing what’s popular around the globe.

[Image Credit: Brian Shaler]

Why retail console games have never been cheaper, historically

Image credit: Flickr user npzo

Recently, we took a look at the history of how much various video game consoles have cost at launch. That research showed that the upcoming consoles from Sony and Microsoft look much more historically competitive once inflation is taken into account (and once expected price drops are extrapolated). But the cost of the hardware is actually not the most significant portion of what you’ll spend on a console over its lifetime. The price of software matters just as much, if not more (and these days there are factors like online service and accessory costs). In light of that, a few readers have asked us to examine just how the prices for games have changed over the years.

This is trickier than it first seems. Console makers don’t set a uniform suggested retail price for every video game released in a given year. Software prices can vary based on publisher, genre, system, format, and more. Game prices are also often reduced quite quickly as a game gets older, so bargain-basement clearance titles can complicate things.

To try to account for these issues, we decided to look at a representative “basket” of games in a variety of genres for each year for which we had reliable data (loosely defined genre list: action, adventure, fighting/brawler, racing, RPG, shooter, sports, and other). For each genre, we determined a general range of prices by looking for the most expensive and least expensive games we could find with documentary evidence of a contemporaneous advertised price. We then took the average of the high prices and the low prices for each “basket” to come up with a general range of game prices for each year. To limit the effects of clearance items, we limited our data to games that were being advertised within a year of their original release date (to see the raw data behind our analysis, as well as sources for prices, check out this Google spreadsheet).
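To illustrate the averaging step with a toy example (the prices below are made up; the real data and sources are in the linked spreadsheet), a minimal Python sketch might look like this:

```python
# Hypothetical one-year "basket": for each genre, the least and most
# expensive games found with a documented contemporaneous advertised price.
basket = {
    "action": (49.99, 69.99),
    "rpg":    (59.99, 79.99),
    "sports": (39.99, 59.99),
}

def price_range(basket):
    """Average the genre lows and the genre highs to get a general
    (low, high) range of game prices for the year."""
    lows = [low for low, _high in basket.values()]
    highs = [high for _low, high in basket.values()]
    return sum(lows) / len(lows), sum(highs) / len(highs)

low, high = price_range(basket)
print(f"Games that year ranged roughly from ${low:.2f} to ${high:.2f}")
```

Repeating this for each year’s basket yields the per-year price ranges the analysis is built on.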


Japanese earthquake literally made waves in Norway

Aurlandsfjord, Norway, where seiches were observed. (Image credit: Grisha Levit)

During an earthquake, it’s the area near the epicenter that sustains the greatest damage, though shaking can sometimes be perceptible at impressive distances. But the seismic energy released by the earthquake actually travels around the globe—several times, in fact—as the planet, in essence, rings like a struck bell. The bigger the earthquake, the louder it rings. And the magnitude 9.0 quake that struck just off the coast of Japan on March 11, 2011, was very big indeed.

Scientific instruments like seismometers are sensitive enough to pick up seismic waves from distant earthquakes, even on their second or third trip around the planet. (Satellites have even detected the accompanying atmospheric waves.) It doesn’t always take super-precise measurements to know something is happening, however. A groundwater monitoring well in Virginia made the passage of seismic waves from the 2011 Tohoku earthquake quite clear in the form of a rapid two-foot rise in water level.

While the tsunami that accompanied the earthquake in Japan was devastating, waves of a very different sort were spawned far away—in the fjords of Norway. A number of witnesses noticed the strange waves, occurring as they did on a calm morning when the fjord waters were otherwise smooth. As some managed to capture on video, the water swelled and ebbed by as much as five feet once per minute or so for several hours—starting about half an hour after the earthquake in Japan.


The After Math: Microsoft fits new Windows, Sony pushes the limits of a smartphone screen

Welcome to The After Math, where we attempt to summarize this week’s tech news through numbers, decimal places and percentages.


In recent weeks, we’ve covered BlackBerry, Google, Nokia, Apple, Sony and (at least gaming-wise) Microsoft, but this week, the Redmond company returned to dominate tech news, showcasing a new version of Windows 8 (and RT) at its annual Build conference. It has tried to address some of the early criticisms of the operating system and make it all a bit more accessible. It even threw in a Start button — of sorts. Meanwhile, Sony set jacket pockets quivering, announcing its new 6.4-inch smartphone (that’s not a tablet), replete with arguably the most powerful mobile processor out there. For a numerical breakdown of the week’s news, follow us after the break.



Why Obama Was Never Going To Be A Civil Liberties Champion


Barack Obama was never going to be a champion of civil liberties; he leads a growing sect of the Democratic party that prioritizes the collective good and mass innovation over individualism. This coercively inclusive political philosophy holds that every citizen, business, country, and institution has an obligation to contribute to the common good.

Obama will mandate universal health coverage but let private insurers run the programs; he’ll maintain Middle Eastern wars but work with Russia on global nuclear disarmament; expand the education budget but give more resources to union-less charter schools; and build online tools to monitor stimulus spending while collecting phone records of every American.

The philosophy is a mix of fierce anti-individualism and anti-authoritarianism — what political scientists call “communitarianism.” “In my wildest dreams, during eighteen years of championing communitarianism, I did not expect a presidential candidate to be as strongly identified with this political philosophy as Obama is,” gushed George Washington University professor of philosophy Amitai Etzioni.

Established liberal institutions have always worried that communitarian optimism was blind to the damage government agencies and big business could exact on society’s most vulnerable. Since 9/11, the most prominent communitarians have generally defended mass NSA and FBI spying against the fears of their liberal cousins.

“The key issue is not if certain powers — for example, the ability to decrypt e-mail — should or should not be available to public authorities, but whether or not these powers are used legitimately and whether mechanisms are in place to ensure such usage,” said Etzioni, in regard to a series of high-profile debates with former ACLU president Nadine Strossen during the post-9/11 ramp-up to the Patriot Act.

Communitarians are generally OK with privacy invasion, so long as there is sufficient public oversight to make sure the government is doing its job.

“For communitarians, public safety generally comes first,” University of Southern California professor Brian Rathbun writes to me in an email. “A key element of Obama’s personal philosophy is the merits of cooperation — that collective enterprises yield greater gains than individual action.”

The obsession with mass collaboration explains much of Obama’s failing civil liberties record, and it extends across the board: he has also been an unqualified proponent of experimental charter schools, which reject the job stability of traditional ones.

Obama is not the only rabid communitarian. New York Mayor Michael Bloomberg has distinguished himself as a collaboration- and drone-happy public servant: he’s OK with blanketing the city with law-enforcement drones, and he reportedly threatened to “f***ing destroy” the taxi industry over issues with Uber.

Bloomberg, first and foremost, wants innovation over protectionist policies. During a press conference, Bloomberg told me that taxi unions were part of “the old entrenched industries that try to use the shield of regulation…to protect them from the kind of competition that benefits society”.

Closing the door on civil liberties, however, has opened up some exciting opportunities for innovative policy making. For instance, ObamaCare included a “waiver for state innovation” that exempts any state from the new healthcare law, so long as that state can cover everyone without increasing costs. In essence, it’s like Google’s 20% time: everyone has to innovate, and the hope is that someone will come up with the best solution to healthcare. It’s radically optimistic about the power of individual creativity, but refuses to allow citizens or states to be spectators (hence the core of the name, “community”).

Notably, it’s quite easy to spot communitarians based on whom Silicon Valley’s deep-pocketed donors are supporting. While the Bay Area gave more to Obama than either Wall Street (New York) or Hollywood (Los Angeles), its donors give to only a few candidates.

Newark Mayor and Senate candidate Cory Booker is a Silicon Valley favorite and has focused the $100 million education donation he got from Mark Zuckerberg on controversial charter schools. It should be no surprise that the union-less, privacy-skittish social network is itself a communitarian totem.

Facebook has aggressively fought FTC regulations that would deny its ability to automatically enroll users in new products (that is, requiring “opt-in”). Facebook has argued that if users had been required to opt in to the news feed, the initial privacy hysteria would have blunted adoption of a tool that is now a staple of social networking. Just like in a community, participation and sharing are the default assumptions; privacy and isolation are left as an inconvenient, anti-social option.

Together with their friends in Silicon Valley, communitarians are becoming a dominant force in society. To the extent that readers optimistically believe that cooperation between foreign governments, big business, and everyday citizens can yield collective prosperity, the growing power of the communitarian Democratic party is a welcome change. For those who fear that we live in a zero-sum world of the powerful and the weak, communitarians are blindly leading us into an unequal, rights-free society.

If you’d like to learn more, read my full OpEd on The Daily Beast.

How a 30-year-old lawyer exposed NSA mass surveillance of Americans—in 1975

The NSA at night—always watching. (Image credit: NSA)

US intelligence agencies have sprung so many leaks over the last few years—black sites, rendition, drone strikes, secret fiber taps, dragnet phone record surveillance, Internet metadata collection, PRISM, etc., etc.—that it can be difficult to remember just how truly difficult operations like the NSA have been to penetrate historically. Critics today charge that the US surveillance state has become a self-perpetuating, insular leviathan that essentially makes its own rules under minimal oversight. Back in 1975, however, the situation was likely even worse. The NSA literally “never before had an oversight relationship with the Congress.” Creating that relationship fell to an unlikely man: 30-year-old lawyer L. Britt Snider, who knew almost nothing about foreign intelligence.

Snider was offered a staff position on the Church Committee, set up by Congress in 1975 to function as a sort of Watergate-style inquiry. This initiative focused on CIA subversion of foreign governments and spying on American citizens, recently revealed in the New York Times by noted investigative reporter Seymour Hersh. Congressional “oversight” of intelligence agencies was, at the time, nearly useless, as the Senate’s official history of the Church Committee notes:

In 1973, CIA Director James Schlesinger told Senate Armed Services Chairman John Stennis that he wished to brief him on a major upcoming operation. “No, no my boy,” responded Senator Stennis. “Don’t tell me. Just go ahead and do it, but I don’t want to know.” Similarly, when Senate Foreign Relations Committee Chairman J.W. Fulbright was told of the CIA subversion of the Allende government in Chile, he responded, “I don’t approve of intervention in other people’s elections, but it has been a long-continued practice.”

The committee was initially given nine months and 150 staffers to conduct its work. Snider was tasked with expanding the committee’s inquiry to the NSA, which was so opaque that no one in Congress could even come up with an org chart for the Fort Meade-based operation. Years later, Snider became the CIA’s inspector general. In late 1999 he wrote up his memories of that early NSA investigation and how it helped to reveal a massive program of NSA spying on telegrams—including those sent by US citizens. It turned out that the telegram companies had secretly agreed to the scheme out of a sense of “patriotic duty.” Sounds a bit like the NSA of today, no?


The Ephemeralnet


In the aftermath of the dot-com crash, a new era for the web began to take hold – a turning point whose seismic shift was hyped under the moniker “Web 2.0.” The concept referred to the web becoming a platform, a home for services whose popularity grew through network effects, user-generated content and collaboration. Blogging, social media sites, wikis, mashups, and more reflected a changing consciousness among the Internet’s denizens – one which Tim O’Reilly, whose Web 2.0 conferences helped solidify the term as a part of our everyday lexicon, once described as a “collective intelligence, turning the web into a kind of global brain.”

Since that time, because of humans’ intrinsic need to apply a structure to amorphous things to give them a semblance of order – things like the web, for example – there have been many attempts to define what “Web 3.0” might be. At one point, the assumption was that Web 3.0 would be the “semantic web” – a place where machine-readable metadata is applied, allowing the web and the services that live upon it to understand the content and the links between the people, places, and things that fill its servers.

To some extent, the semantic web did arrive in things like Google’s Knowledge Graph, an upgrade to Google Search whose underpinnings include a database filled with millions of objects and billions of connections between them. But semantic technology never became widespread enough to define a new era of the web itself.
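For a flavor of what machine-readable links between people, places, and things look like, here is a toy sketch; real systems use standards like RDF, and these triples are invented purely for illustration:

```python
# A tiny triple store: (subject, predicate, object) statements that a
# machine can traverse. Invented data, purely for illustration.
triples = [
    ("Tim O'Reilly", "founded", "O'Reilly Media"),
    ("O'Reilly Media", "organizes", "Web 2.0 Summit"),
    ("Web 2.0 Summit", "held_in", "San Francisco"),
]

def related(entity):
    """Return every statement that mentions the entity."""
    return [t for t in triples if entity in (t[0], t[2])]

print(related("O'Reilly Media"))
```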

Meanwhile, others claimed Web 3.0 would be the shift to mobile devices, the rise of the “Internet of Things,” or would emerge from web services growing more personalized to their users – Google’s predictive search service “Google Now” could be seen as one example of this, perhaps.

But none of these got to win the Web 3.0 branding, either.

So what will come next?

Will another notable turning point for the web as we know it ever evolve?

Yes, of course, and it’s happening now.

It’s harder to spot this time around because it’s not growing out of the ashes of a largely desiccated web, as Web 2.0 did when it blossomed following the dot-com era’s end. Instead, the new web is growing up alongside the web of today. It could, one day, take over, but that remains to be seen.

And we don’t have to call it Web 3.0. That’s a bit simplistic. But it does deserve recognition.

A Rebellion

In retrospect, what Web 2.0 meant to the vast lot of the web’s users was a large number of lightweight services – software perpetually in beta that ran online, not out of a box. It harnessed the wisdom of the crowds and the willing contributions of user data, which, in the end, became the services’ value.

Facebook’s social graph and profile data, for instance, is now the product it sells to advertisers, who target anonymized demographic groups based on things like age, education, location, and more. Wikipedia grew from the efforts of thousands who aggregated their time and knowledge to build an online encyclopedia. Even the “blogosphere” is a Web 2.0 product, one where a network of writers and publishers linked and commented, reblogged and shared.

But the web is not a static thing. It grows and shifts to reflect the society it serves.

For those who saw the web emerge in their lifetimes, the ability to publish and connect with a vast audience around the world is a marvel. To rediscover long-lost friends on social networks, or chat with someone on the other side of the globe, or share photos with your friends and family so easily, still amazes.

But today, a new group of web users is coming of age. They aren’t in awe of the connectivity and openness the web provides; that’s just the way it’s always been. And sometimes, they even kind of resent it. Barely able to remember a time when the web didn’t exist, this group has been forced to grow up online, living in public like the artists in the human terrarium under New York City once did, in an art project-slash-eerie premonition of a future yet to come.

“In the not-so-distant future of life online, we will willingly trade our privacy for the connection and recognition we all deeply desire,” said Ondi Timoner, who documented this and other controversial human experiments in her 2009 film “We Live in Public.”

She also warned us of the dangers of living our lives exposed, with what now sounds like common sense: “the Internet, as wonderful as it is, is not an intimate medium. It’s just not,” she said. “If you want to keep something intimate and if you want to keep something sacred, you probably shouldn’t post it.”

But we did it anyway. We posted it. We liked it. We shared it. We hashtagged it.

And when we ran out of things to document about ourselves, we turned towards our children.

Now of age, those young digital natives whose lives we cataloged without their consent are rebelling. They’re discarding the values of the previous generation – those of their parents, the authoritarians – and defining new ones.

They don’t want open social networks; they want intimacy. They don’t believe every action has to be meaningful and permanent. They imagine the web as deletable.

The Rise Of The Ephemeralnet

The incredible growth of Snapchat, the “ephemeral” messaging service where pictures and videos are taken, shared, then discarded – allowed to become memories – is often pointed to as the key trend defining this new era, but that’s just wrong. It’s only one example.

Among its active users, Snapchat is engaging and addictive, and representative of an increased desire for privacy. However, it’s not the only service out there defining a different kind of experience. The global messaging market as a whole has given way to a fragmented collection of dozens of similar services, each with millions of users of its own. While these may not have the parlor trick of “disappearing” messages, they also represent a rebellion against the “one network to rule them all” concept.
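The core mechanic behind “disappearing” messages is simple enough to sketch in a few lines. Below is a minimal, hypothetical illustration – not Snapchat’s actual implementation, which enforces deletion on its servers – of the ephemeral idea: each message carries an expiry time, and expired messages are dropped on read rather than archived.

```python
import time

class EphemeralInbox:
    """A toy inbox whose messages expire instead of accumulating."""

    def __init__(self):
        self._messages = []  # list of (expires_at, sender, body)

    def post(self, sender, body, ttl_seconds=10):
        self._messages.append((time.time() + ttl_seconds, sender, body))

    def read(self):
        # Drop anything past its expiry; only live messages survive.
        now = time.time()
        self._messages = [m for m in self._messages if m[0] > now]
        return [(sender, body) for _, sender, body in self._messages]

inbox = EphemeralInbox()
inbox.post("alice", "this disappears in 10 seconds")
print(inbox.read())  # visible now
time.sleep(11)
print(inbox.read())  # [] -- nothing left to archive, search, or resurface
```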

These messaging apps are often used with a close set of friends or family members, where data shared remains fairly isolated and private, as opposed to publicized and findable on the larger web. It’s not about anonymity. It’s about a different type of community. One not cluttered by bosses or parents. One less searchable.

Even Twitter is returning to its SMS roots among these younger users, who revel in its semi-private nature. Twitter users can adopt pseudonyms, and you can’t surface tweets older than a week through Twitter search, which makes it feel like less of a permanent record and offers a freedom to be “real” without consequence.

Meanwhile, on the youth-dominated social service Tumblr, users also don’t have to sign up with their “real” identities. This allows them to explore and experiment with new identities and sub-cultures, the way young adults naturally do in the offline world.

And on Whisper, a growing mobile social network for sharing secrets that this month saw over 2.1 billion pageviews, users can express their innermost feelings – even those they’re ashamed or scared of – and become connected with a community for support or, in the case of darker impulses, with actual help. And all of it without ever identifying themselves by name.

While some confuse the “Ephemeralnet” with the so-called “SnapchatNet,” in reality, it’s not only a new way to socialize online, it’s a new way to think about everything. You can see the trend also in the rise of the (somewhat) anonymous and untraceable digital currency Bitcoin. Unlike traditional transactions, Bitcoin is decentralized and doesn’t require banks or governmental oversight or involvement. And though it’s not entirely anonymous, there are already efforts, like Zerocoin, working to change that.
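That “(somewhat)” is doing real work: Bitcoin is better described as pseudonymous than anonymous. Here’s a loose sketch of why, assuming nothing about the real protocol beyond the key-hashing idea (actual Bitcoin addresses also layer RIPEMD-160 hashing and Base58Check encoding on top):

```python
import hashlib
import os

# A Bitcoin-style identity is a hash of a public key, not a name.
public_key = os.urandom(33)  # stand-in for a compressed EC public key
address = hashlib.sha256(public_key).hexdigest()
print(address)
# Nameless -- but every transaction from this address is publicly
# linkable on the ledger, which is the gap Zerocoin aims to close.
```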

There are efforts to make other forms of communication more ephemeral, too. Phone calls become more private through apps like Burner, SMS more secure through apps like Gryphn or Seecrypt, and internal business communications unarchivable through apps like O.T.R.

As we head into the post-PRISM era, there’s even a chance that this trend toward privacy will become further entrenched. Take, for instance, the little-known anonymous search engine DuckDuckGo, which saw traffic spike by 50 percent in just over a week after the PRISM revelations. If it can now find traction for its online service and accompanying mobile apps based not just on PRISM fears but on this larger trend of impermanence, it could have a shot at siphoning off enough users to sustain its business.

But at the end of the day, the Ephemeralnet may never become as defining a trend as Web 2.0 once was. Though it may find adoption beyond the demographics of its youngest participants, it will continue to share the web with the services that preceded it – services too big, too habitual, and too lucrative to die off entirely.

But in the meantime, a new social norm could still be established. One where those who play for the cameras are outed and ostracized; where we value human connections enabled by technology over meaningless “social” scores; and where we care more about our relationships, and less about the number of likes and shares we have.

Image credits: giphy; lead – unknown via Weheartit

Review: The Hisense Sero 7 Pro is a Nexus 7 clone for $50 less

The Hisense Sero 7 Pro (left) and the Nexus 7 (right): peas in a pod.
Andrew Cunningham

A few weeks ago we reviewed Hisense’s Sero 7 Lite, a new budget Android tablet that isn’t very good until you consider that it costs $99. The tablet that Hisense really wants you to see, though, is the $149 Sero 7 Pro. It has a quad-core Nvidia Tegra 3 SoC, a 1280×800 7-inch screen, and 1GB of RAM, and it runs Android 4.2. If these specs sound familiar, it’s probably because they’re identical to those of the Nexus 7 tablet that Google and Asus will sell you for $199.

Using the Sero 7 Pro is very similar to using the Nexus 7 with Android 4.2 installed, so for this review we’ll be focusing on a side-by-side comparison with the tablet that Google has been selling for about a year now. If you’re buying a 7-inch Android tablet today, should you stick with the Nexus or save yourself the $50?

What does it share with the Nexus 7?

Specs at a glance: Hisense Sero 7 Pro
Screen: 1280×800 7″ (216 ppi) IPS touchscreen
OS: Android 4.2.1 “Jelly Bean”
CPU: 1.2GHz Nvidia Tegra 3 (1.3GHz in single-core mode)
RAM: 1GB
GPU: Nvidia Tegra 3
Storage: 8GB NAND flash (expandable via microSD)
Networking: 802.11a/b/g/n, Bluetooth 3.0, NFC, GPS
Ports: Micro USB, mini HDMI, headphones, microSD card
Size: 7.87″ × 4.95″ × 0.43″ (199.9 × 125.7 × 10.9 mm)
Weight: 0.79 lbs (358 g)
Battery: 4,000 mAh
Starting price: $149
Other perks: 2MP front camera, 5MP rear camera, power adapter

The screen, the SoC, and the RAM are probably the three hardware components with the biggest effect on your tablet experience, and the Nexus 7 and the Sero 7 Pro share all three: a five-point 1280×800 IPS touchscreen, a quad-core Tegra 3 SoC that can run at up to 1.3GHz, and 1GB of RAM. The Sero 7 Pro also includes the same 8GB of internal storage as the original entry-level Nexus 7, though Google’s more recent $199 model has since been bumped to 16GB.
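As a quick sanity check, the 216 ppi figure in the spec table follows directly from that shared panel’s resolution and diagonal:

```python
import math

# Pixel density of a 1280x800 panel stretched across a 7-inch diagonal.
diagonal_px = math.hypot(1280, 800)  # ~1509.4 pixels corner to corner
ppi = diagonal_px / 7                # inches along the same diagonal
print(round(ppi))                    # 216
```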
