
Showing posts with label 100.

Monday, November 21, 2016

Download a set of 100 high resolution wallpapers for your desktop

High resolution wallpapers for desktop
This is a set of high resolution wallpapers including the above wallpapers and more.
Click to download the wallpapers.

Monday, September 5, 2016

Hacking Snapchat's people verification in less than 100 lines

I woke up this morning and saw this article detailing Snapchat's new people verification system.

(Credit: Screenshot by Lance Whitney/CNET)
You may have seen it already, but if not, here is a summary: Snapchat has you pick out each image containing the Snapchat ghost to prove you are a person. It is essentially a less annoying CAPTCHA.

The problem with this is that the Snapchat ghost is very distinctive; you could even call it a template. For those of you familiar with template matching (which is what they are asking you to do to verify your humanity), it is one of the easier tasks in computer vision.
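To see just how easy template matching is, here is a minimal sketch (my illustration, not the actual FindTheGhost code): a brute-force normalized cross-correlation search in plain NumPy that locates a toy "ghost" pattern pasted into a larger array.

```python
import numpy as np

def match_template_ncc(image, template):
    """Slide the template over the image and return the location
    with the highest normalized cross-correlation score."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            window = image[y:y + th, x:x + tw]
            w = window - window.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window, no correlation defined
            score = (w * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# A toy "ghost" pattern pasted into an empty scene is found exactly,
# with a perfect correlation score of 1.0.
ghost = np.array([[0., 9., 0.],
                  [9., 9., 9.],
                  [9., 0., 9.]])
scene = np.zeros((10, 10))
scene[4:7, 5:8] = ghost
pos, score = match_template_ncc(scene, ghost)
```

An exact copy of the template always scores 1.0, which is why a rigid, unoccluded icon like the ghost is such weak protection.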

This is an incredibly bad way to verify someone is a person because it is such an easy problem for a computer to solve.

After I read this, I spent about 30 minutes writing code to make a computer do this. There are many ways to solve this problem; HOG features probably would have been best, or even color thresholding and PCA for blob recognition, but those would have taken more time and I'm lazy (read: efficient). I ended up using OpenCV with simple thresholding, SURF keypoints, and FLANN matching, plus a uniqueness test to verify that multiple keypoints in the training image weren't all being matched to a single point in the testing image.

First, I extract the different images from the slide above, then I threshold them and the ghost template to find objects of that color. Next, I extract feature points and descriptors from the test image and the template using SURF and match them using FLANN. I keep only the "best" matches using a distance metric, then check all the matches for uniqueness to verify that one feature in the template isn't matching most of the test features. If the uniqueness is high enough and enough features are found, we call it a ghost.
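The distance filtering and uniqueness check can be sketched independently of OpenCV. The sketch below is my own illustration (the data layout, function names, and thresholds are hypothetical, not taken from the actual code); it operates on precomputed 2-nearest-neighbor matches, each represented as a pair of (template index, query index, distance) candidates:

```python
def filter_matches(knn_matches, ratio=0.75):
    """Lowe-style ratio test: keep a match only when the best
    candidate is clearly better than the second best."""
    good = []
    for best, second in knn_matches:
        t_idx, q_idx, d1 = best
        _, _, d2 = second
        if d1 < ratio * d2:
            good.append((t_idx, q_idx))
    return good

def uniqueness(matches):
    """Fraction of distinct template keypoints among the good
    matches -- low when one template feature soaks up everything."""
    if not matches:
        return 0.0
    template_ids = {t for t, _ in matches}
    return len(template_ids) / len(matches)

def is_ghost(matches, min_matches=4, min_uniqueness=0.8):
    """Declare a ghost when enough good matches exist and they
    come from distinct template features."""
    return len(matches) >= min_matches and uniqueness(matches) >= min_uniqueness

# Four confident matches from four distinct template features -> ghost.
knn = [((0, 10, 0.20), (1, 11, 0.90)),
       ((1, 12, 0.10), (2, 13, 0.80)),
       ((2, 14, 0.30), (0, 15, 0.95)),
       ((3, 16, 0.25), (1, 17, 0.90)),
       ((4, 18, 0.70), (2, 19, 0.75))]  # ambiguous, fails the ratio test
good = filter_matches(knn)
```

The uniqueness check is what prevents the degenerate case where one repetitive template feature matches everywhere and inflates the match count.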

With very little effort, my code was able to "find the ghost" in the above example with 100% accuracy. I'm not saying it is perfect; far from it. I'm just saying that if it takes someone less than an hour to train a computer to break an example of your human verification system, you are doing something wrong. There are a ton of ways to do this with computer vision, all of them quick and effective. It's a numbers game with computers, and Snapchat's verification system is losing.

Below is an example of the output of the code:
Found a Ghost!

And you can find the code on my github here: https://github.com/StevenHickson/FindTheGhost


Note: This just relates to Snapchat's new CAPTCHA system. The reason they implemented it was because of past hacks by people who put far more time and thought into a harder problem. Their work can be found here:
Graham Smith - https://twitter.com/neuegram - http://neuebits.com/
Adam Caudill - http://adamcaudill.com/2012/12/31/revisiting-snapchat-api-and-security/
Gibson Security - http://gibsonsec.org/

Consider donating to further my tinkering.


Monday, May 9, 2016

Fast, Accurate Detection of 100,000 Object Classes on a Single Machine



Humans can distinguish among approximately 10,000 relatively high-level visual categories, but we can discriminate among a much larger set of visual stimuli referred to as features. These features might correspond to object parts, animal limbs, architectural details, landmarks, and other visual patterns we don’t have names for, and it is this larger collection of features we use as a basis with which to reconstruct and explain our day-to-day visual experience. Such features provide the components for more complicated visual stimuli and establish a context essential for us to resolve ambiguous scenes.

Contrary to current practice in computer vision, the explanatory context required to resolve a visual detail may not be entirely local. A flash of red bobbing along the ground might be a child’s toy in the context of a playground or a rooster in the context of a farmyard. It would be useful to have a large number of feature detectors capable of signaling the presence of such features, including detectors for sandboxes, swings, slides, cows, chickens, sheep and farm machinery necessary to establish the context for distinguishing between these two possibilities.

This year’s winner of the CVPR Best Paper Award, co-authored by Googlers Tom Dean, Mark Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan and Jay Yagnik, describes technology that will enable computer vision systems to extract the sort of semantically rich contextual information required to recognize visual categories even when a close examination of the pixels spanning the object in question might not be sufficient for identification in the absence of such contextual clues. Specifically, we consider a basic operation in computer vision that involves determining for each location in an image the degree to which a particular feature is likely to be present in the image at that particular location.

This so-called convolution operator is one of the key operations used in computer vision and, more broadly, all of signal processing. Unfortunately, it is computationally expensive and hence researchers use it sparingly or employ exotic SIMD hardware like GPUs and FPGAs to mitigate the computational cost. We turn things on their head by showing how one can use fast table lookup — a method called hashing — to trade time for space, replacing the computationally-expensive inner loop of the convolution operator — a sequence of multiplications and additions — required for performing millions of convolutions with a single table lookup.

We demonstrate the advantages of our approach by scaling object detection from the current state of the art, involving several hundred or at most a few thousand object categories, to 100,000 categories, requiring what would amount to more than a million convolutions. Moreover, our demonstration was carried out on a single commodity computer requiring only a few seconds for each image. The basic technology is used in several pieces of Google infrastructure and can be applied to problems outside of computer vision, such as auditory signal processing.

On Wednesday, June 26, the Google engineers responsible for the research were awarded Best Paper at a ceremony at the IEEE Conference on Computer Vision and Pattern Recognition, held in Portland, Oregon. The full paper can be found here.

Wednesday, April 6, 2016

A Million Random Digits with 100,000 Normal Deviates

"A Million Random Digits with 100,000 Normal Deviates" is an unlikely title for a best seller, but this book has been in print since 1955. If you're a mathematician, you'll know that it's actually very hard to generate truly random numbers, and since random numbers are widely used by statisticians, physicists, poll takers, market analysts, lottery administrators, and others, this publication was designed to solve that problem. So, if you are in need of a series of random numbers, this book is for you.
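The book's normal deviates were derived from its table of random digits. As a modern illustration of the same conversion (not RAND's original procedure), the Box-Muller transform turns pairs of uniform samples into independent standard normal deviates:

```python
import math
import random

def box_muller(u1, u2):
    """Turn two independent uniform(0,1] samples into two
    independent standard normal deviates."""
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

random.seed(1955)
deviates = []
for _ in range(50000):
    # 1 - random() maps [0,1) to (0,1], keeping log() safe at zero.
    z1, z2 = box_muller(1.0 - random.random(), 1.0 - random.random())
    deviates.extend((z1, z2))

# 100,000 deviates should have mean near 0 and variance near 1.
mean = sum(deviates) / len(deviates)
var = sum(z * z for z in deviates) / len(deviates)
```

Of course, this leans on a pseudorandom generator; the whole point of the 1955 book was that such generators did not yet exist and physically generated digits had to be tabulated and shared instead.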

from The Universal Machine http://universal-machine.blogspot.com/

