
Monday, August 15, 2016

Twitter has a new kind of ad: Stickers


A few months back, Twitter rolled out a new feature that allowed users to bedazzle the photos they share with cartoon stickers. Now it’s hoping to make a little money from the experience.
Twitter announced promoted stickers on Monday, a way for big brands to pay so that their stickers show up at the top of the pile. Twitter is launching the new ad type with Pepsi, which makes a lot of sense when you consider that Pepsi CFO Hugh Johnston recently joined Twitter’s board.
Selling stickers is a way for Twitter to try to attract more brand-marketing dollars, an advertising cohort that has always been Twitter’s bread and butter but might not be spending like it has in the past. Twitter reported disappointing earnings last quarter, and it is looking for new revenue streams to make up for the fact that it’s not adding many new users.
Which brings us to stickers, something Twitter could easily package with other ad types for big brand campaigns. You’ll start seeing the promoted stickers Monday.

Sunday, August 14, 2016

I want a companion ship

It’s time: We need a drone with automated follow features that not only includes a camera for cataloging your many feats, but also has an on-board voice assistant and a cellular connection to get stuff done for you while you’re out and about.
It’s time for friendly flyers; it’s time we each had a companion ship.
I don’t know who gets this done – maybe Parrot, since it seems in the same spirit of fun that gave rise to products like the Jumping Sumo and Rolling Spider drones the French company makes. I don’t really care who does it, either – let’s just get these things made so we can have airborne robot pals.
Listen, this isn’t a big ask. All the parts are in place, since we already have multiple companies building drones that follow you autonomously. Some are designed for extreme sports, and some are just pocket drones basically made for giggles. Object tracking in vision systems isn’t new, either, nor is tying it to an aerial drone’s flight and navigation abilities.
I first talked about the possibility of personal companion drones with a drone-oriented entrepreneur three years ago, and since then, we’ve come a long way. Autonomy systems are shrinking in terms of required physical components, and Amazon and others are now actually trialling drone delivery. Commercial use of drones in fields like agriculture and construction site management is also far more commonplace.
Okay, the fly-and-follow component is probably the easy part. What about a voice assistant? Look, I got this for you: Alexa.
Amazon’s voice-based assistant is flexible enough to do a lot for you, and extensible enough that building in any tasks not currently present would be relatively easy. I envision using a companion drone the way you would your home-bound Amazon Echo, anyway – grocery lists, timers, triggering actions in connected devices.
But because it’s not tethered to you, and literally flies, it can do more, too, including snapping an aerial selfie and posting it directly to Instagram, or heading to the corner store to pick up some gum. It could play with your kids, fly up and give you hyper-localized weather conditions, or help you see around corners on a late night walk home.
Always-on connectivity, a microphone system sophisticated enough to hear your commands, and persistent follow functions mean big battery draws, which is bound to be the main limiting factor here. Already, max flight times on most consumer drones don’t extend much beyond 20 minutes.
Battery life is probably the primary limiting factor to building better drone buddies, but a fix could come via frequent, universal charging mechanisms for short-term top-ups. I’m thinking induction charging surfaces, placed throughout a person’s home and yard, and then eventually built into city infrastructure. This has a second advantage, since it would also provide more accessible charging options for smartphones that support wireless charging standards.
Another possible fix: Make the drones only semi-aerial. Bots that roll seem to be able to manage longer trips between power-ups, and can hold larger charges since they don’t have to leave terra firma. Maybe create a droid-like docking cradle, so your flying friend can occasionally land and roll around R2-D2 style.
SwagBot is an Australian ground-bound agriculture bot.
This whole concept probably sounds dystopian to some – imagine a city in which everyone had a drone deployed very nearby. But right now, I’m mainly looking for someone to put the pieces together to see how this might work in limited use settings in private and semi-private quarters.
Mostly, I’d be interested to see how it impacts smartphone usage, and what it does in terms of cutting back on our time spent interfacing directly with our connected gadgets.
Amazon is probably the company doing the most right now to erase the interaction layer between people and their computers, through voice-based interfaces like the Alexa in the Amazon Echo, and their literal single-button solutions with the Dash product ordering gadget. It may seem like a weird way to get there, but a drone companion you can dismiss whenever you want could be how you replace even more traditionally smartphone-based activities with something less omnipresent.
In the end, what I want really are fully functional Star Wars droids. But drones might be the way forward, given the progress and investment in that tech, and the efforts already being made to give them greater autonomy, and integrate them into our existing city infrastructure. It’s an engineering challenge, too, that manages to successfully bring together a number of areas of intense current investment interest.
But really, it would just be very cool.

Young Brazilians are more excited for Pokémon Go than the Olympics


Forget beach volleyball, soccer or tennis, not to mention the steeplechase or discus. Pokemon Go is challenging the Olympics for most popular game among some young Brazilians.
Hundreds of them turned out in a Rio de Janeiro park on Saturday holding their mobile phones to hunt for virtual creatures in the hyper-reality game app that has become a craze in Brazil since its release two days before the Games.
“I went to a football game to see Brazil play Sweden, but after Pokemon Go started I lost interest,” said student Lourdes Drummond at the Quinta da Boa Vista park, once the gardens of the Brazilian royal family.
The blockbuster game developed by Niantic, in which Japan’s Nintendo Co has a large stake, uses augmented reality and GPS mapping to make animated characters appear in the real world. Players see creatures overlaid on the nearby landscape that they see through a mobile phone camera.
Brazil’s third largest mobile phone company Claro estimates that close to 2 million of its users have downloaded the game just in the Rio area since it was released on Aug. 3. An executive of the company owned by Carlos Slim’s America Movil said more than half of those users had been inside or near Olympic venues hunting for Pokemon.
Even athletes have been addicted to the game. Japanese gymnast Kohei Uchimura downloaded the app when he got to Brazil for pre-Games training before Pokemon Go was launched in the country. He ran up almost $5,000 in international phone charges.
That did not stop him from winning two gold medals and becoming the first man in over 40 years to claim back-to-back all-around titles, and only the fourth in history.
As Rio residents rode paddle boats on the lake of the Boa Vista park, youths explored the grounds seeking Dragonite and other prized Pokemon to add to their collections. They huddled in the shade of the 19th-century royal palace to swap tips.
“There is no interest in the Olympics here, just how to get to the next stop where there are the most Pokemon,” said sociologist Joao Carlos Barssani, 31, himself joining the hunt.
When a boy shouted “I found one!” dozens sprinted after him in pursuit.
It may not be a physical sport, but the novelty of Pokemon Go is the mobility involved compared with traditional video games. You have to get up and go outside, cellphone in hand, to search your city and accumulate as many Pokemon as you can.
“Before I never left home. Now every time my mother wants me to do any shopping, I’m out the door,” said Rafael Moura Barros, an IT student who believes the game will help reduce obesity in Brazil.
Barssani said the game was changing the way Brazilians were using urban space in cities long plagued by high crime rates. People are frequenting parks and squares that had been abandoned for fear of getting mugged, he said.
“It’s good to have lots of people around you, so your phone doesn’t get robbed,” said student Leonardo Perreira.
- Abdulsamad Aliyu
Via: Firstpost

Saturday, August 13, 2016

Universal Orlando Launching Spooky VR Experience This Halloween

Thanks, but I'll pass on this one; haunted houses and their irritating jump scare after jump scare after jump scare aren't this writer's cup of tea. Crazy psychological horror? Sure. A dude in a mask with a chainsaw jumping out of a mesh wall? Not so fun. And when you throw virtual reality into the mix, that's a double dose of not fun; I'll be at the churro stand.
However, there are plenty of people who like a good scare and are intrigued by the premise of virtual reality. And for them, Universal Orlando has the ideal experience this Halloween season. Not to be outdone by other major theme parks that have been busy adding virtual reality to their roller coasters, Universal Orlando announced yesterday that it's launching a brand-new haunted virtual reality experience for attendees this fall.
"I am ecstatic to announce a cutting-edge, terrifying virtual reality experience is coming to Universal Orlando's Halloween Horror Nights. 'The Repository' seamlessly blends custom virtual reality technology with immersive real-life storytelling bringing guests into the next dimension of horror experiences," reads a blog post from TJ Mannarino, senior director of art & design at Universal Orlando.

Those participating in The Repository will be grouped together into teams of four, tasked with figuring out all the creepy paranormal elements of Universal Orlando's story. Judging by the pictures the theme park has released so far, it appears as if you'll just be donning big, bulky virtual reality headsets—no controllers—on your little spooky journey. And if that's the case, it's unclear whether the virtual experience is meant to be interactive or whether it's just something you'll be watching (and screaming at) while standing around in a giant room.
"This is one of the most electrifying Halloween Horror Nights projects I've ever worked on – primarily because it is the culmination of 8 years of experimental storytelling, drawing on feedback from our vast HHN audience to learn what thrills and chills them the most," Mannarino continued.
Perhaps the scariest part of The Repository is its price, though. You'll have to pay $50 per person on top of your pass for the park's Halloween Horror Nights admission in order to partake in the virtual reality experience. Tickets for the experience go on sale for the general public starting August 23, but annual passholders can start purchasing on August 16.
- Abdulsamad Aliyu
Via: PC Mag

Hearing is like seeing for our brains and for machines

There is an array of neural net machine learning approaches that are more than simply “deep.” At a time when neural networks are increasingly popular for advancing voice technologies and AI, it’s interesting that many of the current approaches were originally developed for image or video processing.
One of those methods, the convolutional neural network (CNN), makes it easy to see why image-processing neural nets map so well onto the way our brains process audio stimuli. CNNs, therefore, nicely illustrate that our audio and visual processes are connected in more ways than one.

What you need to know about CNNs

As human beings, we recognize a face or an object regardless of where in our visual field (or in a picture) it appears. When you try to model that capability in a machine, by teaching it how to search for visual features (like edges or curves at a lower level of a neural network or eyes and ears at a higher level, in the example of face recognition), you typically do so locally, as all relevant pixels are close to each other. In human visual perception, this is reflected by the fact that a cluster of neurons is focused on a small receptive field, which is part of the much larger entire visual field.
Because you don’t know where the relevant features will appear, you have to scan the entire visual field, either sequentially, sliding your small receptive field as a window over it (top to bottom and left to right) or have multiple smaller receptive fields (clusters of neurons) that each focus on (overlapping) small parts of the input.
The latter is what CNNs do. Together, these receptive fields cover the entire input and are called “convolutions.” Higher levels of the CNN then condense the information coming from the individual lower-level convolutions and abstract away from the specific location, as shown below.
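The local-receptive-field idea is easy to see in code. Below is a toy, pure-Python sketch (not how any production CNN is implemented) in which one small shared kernel slides over an image, so the same feature is detected wherever it appears; the image and the hand-picked edge kernel are invented for illustration, whereas a real CNN learns its kernels from data:

```python
# One small kernel (shared weights) is applied at every position of the
# input, so a feature is detected regardless of where it appears.

def convolve2d(image, kernel):
    """Slide `kernel` over `image` ("valid" mode) and return the feature map."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            s = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    s += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(s)
        out.append(row)
    return out

# A vertical-edge detector responds wherever an edge occurs in the input.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[-1, 1], [-1, 1]]
feature_map = convolve2d(image, edge_kernel)  # peaks where the edge sits
```

Every entry of `feature_map` is produced by the same weights; that weight sharing is exactly what lets a CNN find a feature regardless of its position in the visual field.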
[Figure: schematic of a convolutional neural network. Source: Wikipedia]
So, if you search for faces or objects in your photos using Google Photos, or the equivalent new feature in Apple’s iOS 10, you can assume that CNNs are at work identifying the relevant candidate locations in pictures where the requested face or object might appear.
[Figure: region-based object detection. Source: “Region-based Convolutional Networks for Accurate Object Detection and Segmentation” by Ross Girshick, Jeff Donahue, Trevor Darrell and Jitendra Malik]
But we have also found several applications of CNNs to speech and language.
CNNs can be applied to a raw speech signal in an end-to-end way (i.e. without manual definition of features). The CNNs look at the speech signal by unfolding an input field with time as one dimension and the energy distribution over the various frequencies as the second dimension into their “convolutions,” thereby learning automatically which frequency bands are most relevant for speech. The higher layers of the network are then used for the core task of speech recognition: finding phonemes and words in the speech signal.
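As a rough sketch of that unfolding, the toy code below (all parameter choices are arbitrary and purely illustrative; real recognizers use FFTs, windowing and mel-scale filter banks) turns a raw 1-D signal into the 2-D time-by-frequency grid a speech CNN would convolve over:

```python
import math

def spectrogram(signal, frame_len=8):
    """Naive magnitude spectrogram: one row per time frame,
    one column per frequency bin (a textbook O(N^2) DFT)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    spec = []
    for frame in frames:
        row = []
        for k in range(frame_len // 2 + 1):        # non-negative frequencies
            re = sum(s * math.cos(2 * math.pi * k * n / frame_len)
                     for n, s in enumerate(frame))
            im = -sum(s * math.sin(2 * math.pi * k * n / frame_len)
                      for n, s in enumerate(frame))
            row.append(math.hypot(re, im))         # magnitude of bin k
        spec.append(row)
    return spec

# A pure tone concentrates its energy in one frequency column of every frame.
tone = [math.sin(2 * math.pi * 2 * n / 8) for n in range(32)]
spec = spectrogram(tone)
```

For a pure tone the energy lands in a single frequency column of every frame; a speech CNN’s lower layers learn which of these frequency bands matter for phonemes.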
Once you have those words, the next example is “intent classification” in natural language understanding (NLU), or understanding from a user request what type of task the user wants to achieve (I covered in a recent blog post how the other aspect of NLU, named entity recognition, works). For example, in the command “Transfer money from my checking account to John Smith,” the intent would be “money_transfer.” The intent is typically signaled by a word or a group of words (usually local to each other), which can appear anywhere in the query.
So, in analogy to image recognition we need to search for a local feature by sliding a window over a temporal phenomenon (the utterance; looking at one word and its context at a time) rather than a spatial field. And this works very well: When we introduced CNNs for this task, they performed more than 10 percent more accurately than the previous technology.
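As a loose, hand-rolled analogy (a real CNN learns its filters from data; the keyword lists here are invented for illustration), the sliding-window idea for intent classification looks like this:

```python
# Toy intent classifier: slide a fixed-size window over the word sequence
# (the temporal dimension) and fire when a local trigger phrase is found,
# wherever it occurs in the query. These hard-coded trigger lists stand in
# for the learned filters of a real CNN.

INTENT_TRIGGERS = {
    "money_transfer": [("transfer", "money"), ("send", "money")],
    "balance_check": [("check", "balance"), ("account", "balance")],
}

def classify_intent(utterance, window=2):
    words = utterance.lower().split()
    # Slide the window over the utterance, like a 1-D convolution over words.
    for i in range(len(words) - window + 1):
        span = tuple(words[i:i + window])
        for intent, triggers in INTENT_TRIGGERS.items():
            if span in triggers:
                return intent
    return "unknown"

intent = classify_intent("Transfer money from my checking account to John Smith")
```

The trigger phrase can appear anywhere in the query and is still found, which is the point of the convolutional analogy.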

Neighbors in the brain — and in the field

Why are CNNs successful at these tasks? A rather straightforward explanation could be that they just share characteristics with image processing; they are all of the “find something small in something bigger, and we don’t know where it might be” type. But there may be another slightly more interesting explanation, namely the fact that CNNs designed for visual tasks also work for speech-related tasks, which is a reflection of the fact that the brain uses very similar methods to process both visual and audio/speech stimuli.
Consider phenomena like synesthesia, or the “stimulation of one sensory or cognitive pathway lead[ing] to automatic, involuntary experiences in a second sensory or cognitive pathway.” For example, audio or speech stimuli can lead to a visual reaction. (I experience a mild version of this; for me, each day of the week, or rather the word describing the day, has a distinct color. Monday is dark red, Tuesday grey, Wednesday a darker grey, Thursday a lighter red and so on.) This is interpreted as an indication that the processing of audio and speech signals and optical processing must somehow be “neighbors” in the brain.
Similarly, it has been shown that brain areas designed for the processing of audio signals and speech can be used for visual tasks, as when people born with hearing impairments repurpose the audio/speech areas of their brains to process sign language. This probably means that the organization of the brain cells (neurons) processing visual and audio signals must be very similar.
So, back to the practical applications of all of this. It is not too difficult to imagine yourself a couple of years from now sitting in a self-driving car and chatting with an automated assistant asking it to play your favorite music or to book a table at a restaurant. There will likely be several CNNs active “behind the scenes” to make this work:
  • One or several will be used by the LIDAR system (“Light detection and ranging,” a kind of radar based on lasers) used by the car to create a model of its surroundings, including obstacles and other cars.
  • Likely the car will also use cameras to detect and interpret traffic signs; chances are good that CNNs will be used for that, as well.
  • The automated assistant will use CNNs, both in its speech recognition and its natural language understanding components, to find phonemes and words in the speech signal and to find concepts in the stream of words, respectively.
And there will probably be others. Of course, all these tasks are performed by different CNNs, probably even in different control units. And each of the CNNs can only perform exactly the task for which it was trained, and none of the others (it would have to be retrained for that).
However — and here it gets fascinating again — it has been shown that when CNNs are trained, they seem to acquire (especially on the lower layers) somewhat generic features (or concepts, you could say) that carry over to other tasks. It is easy to see why this works for related domains; for example, in speech recognition you can take a CNN trained on one language (say English) and only re-train the top layers on another language (say German), and it will work well on that new language. Obviously, the lower layers capture something that is common between multiple languages.
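A toy way to picture that re-use of lower layers, with hand-coded character statistics standing in for learned convolutional features and a simple perceptron as the re-trainable top layer (all names, features and data below are invented for illustration):

```python
def lower_layers(word):
    """Frozen 'lower layers': a fixed feature extractor shared across languages."""
    vowels = sum(c in "aeiouäöü" for c in word)
    return [len(word), vowels, len(word) - vowels]

def predict(word, w, b):
    """Top layer: a linear threshold over the frozen features."""
    x = lower_layers(word)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_top_layer(examples, epochs=20, lr=0.1):
    """Re-train only the top (linear) layer on data from the new language."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for word, label in examples:                 # label is 0 or 1
            err = label - predict(word, w, b)        # perceptron update
            w = [wi + lr * err * xi
                 for wi, xi in zip(w, lower_layers(word))]
            b += lr * err
    return w, b

# "New language" task: separate long German words from short ones,
# re-using the frozen feature extractor unchanged.
examples = [("haus", 0), ("katze", 0),
            ("schmetterling", 1), ("geschwindigkeit", 1)]
w, b = train_top_layer(examples)
```

Only `w` and `b` change during re-training; `lower_layers` is untouched, mirroring how a speech CNN trained on one language can keep its lower layers when the top layers are re-fit on another.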
However — and I find this more surprising — researchers have also tried training CNNs across modalities, such as images of a scene and textual representations of that scene. The resulting networks can then be used to retrieve images based on text and vice versa. The authors conclude that at some level the CNNs learn features common to both modalities — without being told how to do so. Again, this is an interesting result, demonstrating that seeing and dealing with language (text) must have a lot in common.

Neural net research and innovation has broad implications, and, as we have seen, progress in one application area (like image recognition) also helps advance things in other areas (like speech recognition and NLU). As we have also seen, this may be caused by the many parallels between audio and visual receptors in the human brain, or more generally by how the brain is organized.

There is also another very practical ramification of the similarity of visual and audio/speech and language processing. We have found that graphics processing units (GPUs), which were developed for computer graphics (the visual channel), can be employed to speed up machine learning tasks for speech and language, too. The reason is that the tasks to be handled are again similar in nature: applying relatively simple mathematical operations to lots of data points in parallel. So you could say the new developments in computer gaming helped make the training of deep neural nets feasible.
As a result, we will continue to see fast progress of machine learning and AI in many fields, all benefiting from research efforts in many areas whose results can be shared. More specifically, it is no longer a surprise that CNNs, originally designed for vision, will ultimately help machines to listen and better understand us — something that’s crucial as we are continually propelled forward into this new era of human-machine interaction.
- Abdulsamad Aliyu
Via: TechCrunch

You Don't Need a Google+ Account to Write Play Store Reviews


Do you have a Google+ account? You might not even know. If you have one, do you ever actually use it? You might not care much about the social network, but you definitely had to at least have a Google+ account if you wanted to talk about your experiences with a particular app on the Google Play store.
That's right. Google's policy, up until now, has been that only those with connected Google+ accounts can post app reviews on Google Play. Google has since changed its mind and is starting to eliminate this requirement, but its move comes fairly quietly. A few users recently mentioned the change to Android Police, and TechCrunch has since confirmed with a Google spokesperson that a Google+ account is no longer needed to post reviews.

As part of the removal, Google has also nixed the "+1" voting that you could do for an app, which then indicated to your other Google+ friends that they should potentially check out whatever it is you liked. Users could also access a "people" section on the store that showed them all the different apps their Google+ friends recommended—for those of you with a ton of connections on Google's arguably failed social networking site.
Google appears to be rolling out the change across accounts, as some users have reported being able to post reviews on Google Play without needing a Google+ account, whereas others seem to still be stuck with the requirement. It's unclear just how long it might take Google to lift the restriction for everyone, but we figure it shouldn't take too long.
Google's changes shouldn't be that surprising, given that the company has been slowly distancing its major applications and services away from Google+. Google decoupled Google+ from YouTube in July of last year, and the company announced it was removing the Google+ requirement for Google Play Games in January. Instead, Android gamers were given new "Gamer IDs" to use when signing in, which they could link to a normal Gmail account instead of a Google+ account.
- Abdulsamad Aliyu
Via: PC Mag

A prescription for preventing 3D printing piracy

In the year 2000, the music business was still strong. Record companies produced albums and shipped these physical objects to the stores that sold them. The internet was slowly becoming a system of mass consumption and distribution, but most consumers still purchased physical media. And while the record industry was aware of piracy online, the threat seemed minimal.
Then came Napster.
The music industry tried to stop this large-scale piracy by pursuing both the platforms and individual downloaders — including poor college students. But public opinion turned against the industry. After all, stealing digital music is intangible; it feels different from physically swiping actual CDs or tapes from brick-and-mortar stores. And while today many people access their music legally, it’s safe to say that music industry revenues have yet to recover.

What do pills have in common with MP3 downloads? More than you might think

3D printing, another revolutionary and disruptive technology, makes it cheap and easy to produce physical objects. And just as home copying has changed the copyright industries beyond recognition, 3D printing is poised to do the same to patent-based industries.
That means practically any business that makes physical objects will potentially face a Napster scenario. It may not happen to everyone, but as printer technologies improve and more materials — such as proteins, specialized polymers, metals and other chemicals — become available for printing, it will happen to many.
Take the pharmaceutical industry. Just like a musical recording, where most of the costs are incurred while producing the initial release (hiring the musicians, booking the studio, editing and the like), the bulk of the cost of developing a new pill goes into the front end: research and development, clinical trials and getting through FDA approval. In fact, the raw ingredients may cost only a few pennies. And 3D printing — or digital manufacturing and distribution, as it’s also known — will make reproducing and delivering these pills, lawfully or unlawfully, much easier.

Houston, we have a (patent) problem

If people felt sorry for those poor college students being picked on by the big music industry, imagine how the public will feel about patients with inadequate insurance availing themselves of necessary but pirated prescriptions.
Digitally manufactured pills are not far off. In 2015, the FDA approved the first 3D-printed pill, Spritam (levetiracetam), an epilepsy drug manufactured by Aprecia. The manufacturer claims that the 3D-printed pills are actually more effective, because their layered structure is more easily absorbed by the body, courtesy of the way 3D printers work. With 50 patents on its unique proprietary process, the company also claims that its IP is protected. Aprecia may be the exception today, but it has proved that medicines can be printed.

The pros and cons of a DIY maker culture

Despite the potential threats to IP, 3D printing promises a wealth of benefits, like customization, both to consumers and, if they handle things right, manufacturers. With 3D-printed pharmaceutical medications, doses can be readily tailored to the needs of each patient, much like when pharmacists compound ingredients to make a custom pill for each individual. Likewise, prosthetic limbs are being created to fit each patient exactly.
That is not the only positive aspect of 3D printing. As printers get cheaper, they’ll no doubt begin to appear in pharmacies, which will print pills only as needed, cutting down on costly waste, spoilage and storage. That’s terrific news for the pharmaceutical industry, but there’s a darker side, too. In time, nearly anyone will be able to make the components for almost anything — patented or not, protected or not, dangerous or not.
If a 3D printer in every home sounds a bit far-fetched, a forecast by Gartner predicts that 3D printer shipments will more than double every year between 2016 and 2019, and notes that lower-end models, like those costing less than $2,500, are expected to grow to 40.7 percent of offerings by 2019. Gartner also predicts losses of upwards of $100 billion a year in intellectual property worldwide because of 3D printing, driven not only by piracy but also by industry disruption.

Planning strategically now for a 3D-printed future

Much can be learned from how various industries have dealt with new technology. For the music industry, Napster meant the effective end of legal exclusivity in copyrights. When distribution channels shifted and everyone with a computer could download and reproduce songs, copyright became hard to enforce. As soon as a record company sued one infringer, another popped up, like a nightmarish game of Whac-A-Mole. As a result, the value of the copyrights quickly degraded.
However, as we have also seen, not all IP or the products it protects will go down in value. Some things will become more valuable — and that’s where today’s executives should prepare.
There are numerous ways companies can proactively plan for the impact that 3D printing technology will have on their business. By investing in quality control and supply chain protection now, pharmaceutical companies, for example, can protect their patents and their market share by ensuring that their supply chain is pure, that their quality is guaranteed and that their customers are getting a safe medication, even when that reassurance costs more. This will appeal to consumers who want to be sure they are getting the real deal when it comes to medication — FDA-approved and quality controlled — not an illegitimate knock-off.
Preparing for the ways that 3D printing will affect the market doesn’t always have to be costly or go against the grain. For example, appliance or automobile manufacturers may see sales losses if third parties 3D print replacement parts at a lower cost than those that are manufacturer-issued. Instead of fighting this likelihood, manufacturers would do well to adopt the third-party business model of 3D printing spare parts to order. This reasoning can apply not only to heavy manufacturing, but also to medicine, bringing down the costs of so-called “orphan drugs,” those currently not manufactured because of their low potential for profit.

Even with the advent of 3D printing, we will still live in a world where legitimate businesses are engaged in the licensed manufacturing and distribution of copyrighted works and respect intellectual property. Patent owners could license manufacturing rights to legitimate 3D printing companies — the official 3D printer of Nike products, say — in which only authorized entities could make the official products. That way, patent owners would get income, 3D printing companies could develop new markets and buyers could get legitimate, quality-controlled products. This could be done with branded and lower-cost white-labeled options.

Alternatively, manufacturers could skirt similar issues by creating a design that requires a specific type of material, one not compatible with 3D printing technology. Materials and shapes that have to be mixed or joined in certain ways, for instance, do not easily lend themselves to digital manufacturing technology. Remember, though, that when financial incentives combine with evolving technologies, these types of plans may be short-lived.
Along with all the good that digital technology can bring, a major challenge to patents and other forms of intellectual property may be in the offing. Major industry disruption will soon follow. Only this time, with changes perhaps as long as five to 10 years down the line, manufacturers have time to prepare for it and pivot.