
Friday, 1 March 2019

How big IT companies detect fake news at its source

Detecting fake news at its source


Machine learning system aims to determine if an information outlet is accurate or biased.
Adam Conner-Simons | CSAIL October 4, 2018
Lately the fact-checking world has been in a bit of a crisis. Sites like Politifact and Snopes have traditionally focused on specific claims, which is admirable but tedious; by the time they’ve gotten through verifying or debunking a fact, there’s a good chance it’s already traveled across the globe and back again.
Social media companies have also had mixed results limiting the spread of propaganda and misinformation. Facebook plans to have 20,000 human moderators by the end of the year, and is putting significant resources into developing its own fake-news-detecting algorithms.
Researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and the Qatar Computing Research Institute (QCRI) believe that the best approach is to focus not only on individual claims, but on the news sources themselves. Using this tack, they’ve demonstrated a new system that uses machine learning to determine if a source is accurate or politically biased.
“If a website has published fake news before, there’s a good chance they’ll do it again,” says postdoc Ramy Baly, the lead author on a new paper about the system. “By automatically scraping data about these sites, the hope is that our system can help figure out which ones are likely to do it in the first place.”
Baly says the system needs only about 150 articles to reliably detect if a news source can be trusted — meaning that an approach like theirs could be used to help stamp out new fake-news outlets before the stories spread too widely.
The system is a collaboration between computer scientists at MIT CSAIL and QCRI, which is part of the Hamad Bin Khalifa University in Qatar. Researchers first took data from Media Bias/Fact Check (MBFC), a website with human fact-checkers who analyze the accuracy and biases of more than 2,000 news sites; from MSNBC and Fox News; and from low-traffic content farms.
They then fed those data to a machine learning algorithm and programmed it to classify news sites the same way MBFC does. When given a new news outlet, the system was 65 percent accurate at detecting whether it had a high, low, or medium level of factuality, and roughly 70 percent accurate at detecting whether it was left-leaning, right-leaning, or moderate.
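To make the setup concrete, here is a minimal sketch (in Python with scikit-learn) of a source-level classifier in this spirit. It is not the authors' system; the outlet texts and labels are placeholders, and a simple TF-IDF representation stands in for the paper's much richer feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: each "document" concatenates articles from
# one outlet; labels follow an MBFC-style three-level factuality scale.
source_texts = [
    "concatenated article text from outlet A ...",
    "concatenated article text from outlet B ...",
    "concatenated article text from outlet C ...",
]
factuality_labels = ["high", "medium", "low"]

# TF-IDF over word n-grams stands in for the richer linguistic,
# Wikipedia, and URL features the real system uses.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(source_texts, factuality_labels)

print(model.predict(["concatenated articles from a brand-new outlet ..."]))
```

In practice, each training example would aggregate roughly the 150 articles per source mentioned above.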
The team determined that the most reliable ways to detect both fake news and biased reporting were to look at the common linguistic features across the source’s stories, including sentiment, complexity, and structure.
For example, fake-news outlets were found to be more likely to use language that is hyperbolic, subjective, and emotional. In terms of bias, left-leaning outlets were more likely to have language that related to concepts of harm/care and fairness/reciprocity, compared to other qualities such as loyalty, authority, and sanctity. (These qualities represent a popular theory — that there are five major moral foundations — in social psychology.)
Co-author Preslav Nakov, a senior scientist at QCRI, says that the system also found correlations with an outlet’s Wikipedia page, which it assessed for general length (longer is more credible) as well as target words such as “extreme” or “conspiracy theory.” It even found correlations with the text structure of a source’s URLs: those with lots of special characters and complicated subdirectories, for example, were associated with less reliable sources.
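The URL signal is easy to picture in code. Below is a toy Python sketch of structural URL features along the lines described; the exact features and the example URL are illustrative, not taken from the paper.

```python
from urllib.parse import urlparse

def url_structure_features(url: str) -> dict:
    """Toy URL features in the spirit described above: special-character
    counts and subdirectory depth as weak signals of reliability."""
    parsed = urlparse(url)
    path_parts = [p for p in parsed.path.split("/") if p]
    return {
        "num_special_chars": sum(not c.isalnum() for c in parsed.netloc),
        "num_digits": sum(c.isdigit() for c in parsed.netloc),
        "subdirectory_depth": len(path_parts),
        "hostname_length": len(parsed.netloc),
    }

print(url_structure_features(
    "http://real-news24.example.com/politics/2018/story.html"))
```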
“Since it is much easier to obtain ground truth on sources [than on articles], this method is able to provide direct and accurate predictions regarding the type of content distributed by these sources,” says Sibel Adali, a professor of computer science at Rensselaer Polytechnic Institute who was not involved in the project.

Nakov is quick to caution that the system is still a work in progress, and that, even with improvements in accuracy, it would work best in conjunction with traditional fact-checkers.
“If outlets report differently on a particular topic, a site like Politifact could instantly look at our fake news scores for those outlets to determine how much validity to give to different perspectives,” says Nakov.
Baly and Nakov co-wrote the new paper with MIT Senior Research Scientist James Glass alongside graduate students Dimitar Alexandrov and Georgi Karadzhov of Sofia University. The team will present the work later this month at the 2018 Empirical Methods in Natural Language Processing (EMNLP) conference in Brussels, Belgium.
The researchers also created a new open-source dataset of more than 1,000 news sources, annotated with factuality and bias scores, that is the world’s largest database of its kind. As next steps, the team will be exploring whether the English-trained system can be adapted to other languages, as well as to go beyond the traditional left/right bias to explore region-specific biases (like the Muslim world’s division between religious and secular).
“This direction of research can shed light on what untrustworthy websites look like and the kind of content they tend to share, which would be very useful for both web designers and the wider public,” says Andreas Vlachos, a senior lecturer at the University of Cambridge who was not involved in the project.
Nakov says that QCRI also has plans to roll out an app that helps users step out of their political bubbles, responding to specific news items by offering users a collection of articles that span the political spectrum.
“It’s interesting to think about new ways to present the news to people,” says Nakov. “Tools like this could help people give a bit more thought to issues and explore other perspectives that they might not have otherwise considered.”

Friday, 23 November 2018

Using Machine Learning to Protect Against Potentially Harmful Applications




Detecting Potentially Harmful Applications (PHAs) is challenging and requires a lot of resources. Our security experts need to understand how apps interact with the system and the user, analyze complex signals to find PHA behavior, and evolve their tactics to stay ahead of PHA authors. Every day, Google Play Protect (GPP) analyzes over half a million apps, which generates a lot of new data for our security experts to process.


Leveraging machine learning helps us detect PHAs faster and at a larger scale. We can detect more PHAs just by adding additional computing resources. In many cases, machine learning can find PHA signals in the training data without human intervention. Sometimes, those signals are different from the signals found by security experts. Machine learning can take better advantage of this data and discover hidden relationships between signals more effectively.


There are two major parts of Google Play Protect's machine learning protections: the data and the machine learning models.


Data Sources


The quality and quantity of the data used to create a model are crucial to the success of the system. For the purpose of PHA detection and classification, our system mainly uses two anonymous data sources: data from analyzing apps and data from how users experience apps.


App Data



Google Play Protect analyzes every app that it can find on the internet. We created a dataset by decomposing each app's APK and extracting PHA signals with deep analysis. We execute various processes on each app to find particular features and behaviors that are relevant to the PHA categories in scope (for example, SMS fraud, phishing, privilege escalation). Static analysis examines the different resources inside an APK file while dynamic analysis checks the behavior of the app when it's actually running. These two approaches complement each other. For example, dynamic analysis requires the execution of the app regardless of how obfuscated its code is (obfuscation hinders static analysis), and static analysis can help detect cloaking attempts in the code that may in practice bypass dynamic analysis-based detection. In the end, this analysis produces information about the app's characteristics, which serve as a fundamental data source for machine learning algorithms.
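
As a toy illustration of how one static-analysis signal becomes model input, the Python sketch below turns an app's requested permissions into a fixed-length feature vector. The permission vocabulary and example app are illustrative, not Google's actual feature set.

```python
# Real pipelines would extract requested permissions from the APK's
# AndroidManifest.xml; here the lists are hard-coded placeholders.
PERMISSION_VOCAB = [
    "android.permission.SEND_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.SYSTEM_ALERT_WINDOW",
    "android.permission.INTERNET",
    "android.permission.RECEIVE_BOOT_COMPLETED",
]

def permissions_to_features(requested: set[str]) -> list[int]:
    """Binary indicator vector over a fixed permission vocabulary."""
    return [int(p in requested) for p in PERMISSION_VOCAB]

app_permissions = {
    "android.permission.SEND_SMS",
    "android.permission.INTERNET",
}
print(permissions_to_features(app_permissions))  # [1, 0, 0, 1, 0]
```
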


Google Play Data



In addition to analyzing each app, we also try to understand how users perceive that app. User feedback (such as the number of installs, uninstalls, user ratings, and comments) collected from Google Play can help us identify problematic apps. Similarly, information about the developer (such as the certificates they use and their history of published apps) contributes valuable knowledge that can be used to identify PHAs. All these metrics are generated when developers submit a new app (or new version of an app) and by millions of Google Play users every day. This information helps us to understand the quality, behavior, and purpose of an app so that we can identify new PHA behaviors or identify similar apps.


In general, our data sources yield raw signals, which then need to be transformed into machine learning features for use by our algorithms. Some signals, such as the permissions that an app requests, have a clear semantic meaning and can be directly used. In other cases, we need to engineer our data to make new, more powerful features. For example, we can aggregate the ratings of all apps that a particular developer owns, so we can calculate a rating per developer and use it to validate future apps. We also employ several techniques to focus on the most interesting data. To create compact representations of sparse data, we use embeddings. To help streamline the data and make it more useful to models, we use feature selection. Depending on the target, feature selection helps us keep the most relevant signals and remove irrelevant ones.
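
A minimal Python sketch of two of these steps, using hypothetical data: aggregating per-developer ratings, then selecting the most label-relevant features with a standard utility (scikit-learn's SelectKBest). The embedding step is omitted, and none of this is GPP's internal pipeline.

```python
import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2

# Hypothetical per-app records: aggregate ratings per developer so a
# developer-level reputation signal can back future submissions.
apps = pd.DataFrame({
    "developer": ["dev_a", "dev_a", "dev_b", "dev_c"],
    "rating":    [4.5, 4.1, 2.0, 3.8],
})
developer_rating = apps.groupby("developer")["rating"].mean()
print(developer_rating)

# Feature selection: keep the k signals most associated with the label.
X = [[1, 0, 3], [0, 1, 0], [1, 0, 4], [0, 1, 1]]   # toy feature matrix
y = [1, 0, 1, 0]                                    # 1 = PHA, 0 = clean
X_reduced = SelectKBest(chi2, k=2).fit_transform(X, y)
print(X_reduced)
```
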


By combining our different datasets and investing in feature engineering and feature selection, we improve the quality of the data that can be fed to various types of machine learning models.


Models

Building a good machine learning model is like building a skyscraper: quality materials are important, but a great design is also essential. Like the materials in a skyscraper, good datasets and features are important to machine learning, but a great algorithm is essential to identify PHA behaviors effectively and efficiently.
We train models to identify PHAs that belong to a specific category, such as SMS-fraud or phishing. Such categories are quite broad and contain a large number of samples given the number of PHA families that fit the definition. Alternatively, we also have models focusing on a much smaller scale, such as a family, which is composed of a group of apps that are part of the same PHA campaign and that share similar source code and behaviors. On the one hand, having a single model to tackle an entire PHA category may be attractive in terms of simplicity but precision may be an issue as the model will have to generalize the behaviors of a large number of PHAs believed to have something in common. On the other hand, developing multiple PHA models may require additional engineering efforts, but may result in better precision at the cost of reduced scope.



We use a variety of modeling techniques, both supervised and unsupervised, to refine our machine learning approach.


One supervised technique we use is logistic regression, which has been widely adopted in the industry. These models have a simple structure and can be trained quickly. Logistic regression models can be analyzed to understand the importance of the different PHA and app features they are built with, allowing us to improve our feature engineering process. After a few cycles of training, evaluation, and improvement, we can launch the best models in production and monitor their performance.
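
For illustration, here is a small Python sketch (not GPP's production code) of training a logistic regression on toy PHA features and reading off the learned weights; the feature names and values are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["sends_premium_sms", "hides_icon",
                 "uses_https", "num_permissions"]

# Toy labeled apps: 1 = PHA, 0 = clean. Values are illustrative only.
X = np.array([[1, 1, 0, 25],
              [0, 0, 1, 6],
              [1, 0, 0, 18],
              [0, 1, 1, 9]])
y = np.array([1, 0, 1, 0])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Inspecting coefficients shows which features push the decision toward
# "PHA", which feeds back into feature engineering as described above.
for name, weight in zip(feature_names, clf.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```
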


For more complex cases, we employ deep learning. Compared to logistic regression, deep learning is good at capturing complicated interactions between different features and extracting hidden patterns. The millions of apps in Google Play provide a rich dataset, which is advantageous to deep learning.


In addition to our targeted feature engineering efforts, we experiment with many aspects of deep neural networks. For example, a deep neural network can have multiple layers and each layer has several neurons to process signals. We can experiment with the number of layers and neurons per layer to change model behaviors.
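
The sketch below (TensorFlow/Keras, with a hypothetical feature count) shows how layer count and neurons per layer become simple experimental knobs; it is a minimal example, not the production model.

```python
import tensorflow as tf

def build_model(num_features: int, num_layers: int, units: int) -> tf.keras.Model:
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Input(shape=(num_features,)))
    for _ in range(num_layers):                    # knob 1: number of layers
        model.add(tf.keras.layers.Dense(units, activation="relu"))  # knob 2: width
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))  # PHA probability
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

shallow = build_model(num_features=128, num_layers=2, units=64)
deeper = build_model(num_features=128, num_layers=6, units=256)
shallow.summary()
```
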


We also adopt unsupervised machine learning methods. Many PHAs use similar abuse techniques and tricks, so they look almost identical to each other. An unsupervised approach helps define clusters of apps that look or behave similarly, which allows us to identify and mitigate PHAs more effectively. We can automate the process of categorizing that type of app when we are confident in the model, or request help from a human expert to validate what the model found.
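
As a toy example of this clustering idea (hypothetical feature vectors, not GPP's pipeline), density-based clustering groups near-identical apps so they can be triaged together:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical app feature vectors (e.g., from static/dynamic analysis).
# Apps from the same PHA family tend to land close together.
app_features = np.array([
    [0.90, 0.80, 0.10],
    [0.88, 0.82, 0.12],   # near-duplicate of the first: same family?
    [0.10, 0.20, 0.90],
    [0.12, 0.18, 0.88],
])

labels = DBSCAN(eps=0.1, min_samples=2).fit_predict(app_features)
print(labels)  # apps sharing a cluster id can be reviewed as one group
```
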



PHAs are constantly evolving, so our models need constant updating and monitoring. In production, models are fed with data from recent apps, which help them stay relevant. However, new abuse techniques and behaviors need to be continuously detected and fed into our machine learning models to be able to catch new PHAs and stay on top of recent trends. This is a continuous cycle of model creation and updating that also requires tuning to ensure that the precision and coverage of the system as a whole matches our detection goals.


Looking forward

As part of Google's AI-first strategy, our work leverages many machine learning resources across the company, such as tools and infrastructures developed by Google Brain and Google Research. In 2017, our machine learning models successfully detected 60.3% of PHAs identified by Google Play Protect, covering over 2 billion Android devices. We continue to research and invest in machine learning to scale and simplify the detection of PHAs in the Android ecosystem.



Acknowledgments

This work was developed in joint collaboration with Google Play Protect, Safe Browsing and Play Abuse teams with contributions from Andrew Ahn, Hrishikesh Aradhye, Daniel Bali, Hongji Bao, Yajie Hu, Arthur Kaiser, Elena Kovakina, Salvador Mandujano, Melinda Miller, Rahul Mishra, Damien Octeau, Sebastian Porst, Chuangang Ren, Monirul Sharif, Sri Somanchi, Sai Deep Tetali, Zhikun Wang, and Mo Yu.

Download the best internet security software for free

360 Internet Security

360 antivirus


360 antivirus is the most used application for PC, with a 96% market share
Our web browser is the second most used after Internet Explorer
The 360 Total Security home page is the most visited webpage in China
Our antivirus for mobile is the second most downloaded app in the country
360 Appstore is number one in the country and has served 160 million daily downloads to more than 600 million users
360 Search Engine is the second most important in the country

Install, register and sign in to 360 with this link and get a Premium license for FREE.




Tuesday, 7 August 2018

The Latest Sony Xperia R1 Has Finally Been Revealed with a Jaw-Dropping Introductory Price

Jaw-Dropping Sale for the Sony Xperia R1

Product Description: The Xperia R1 is a perfect fit for your hand and features a 5.2-inch HD display and an octa-core processor with 4G VoLTE. The R1 is made for India, with a sturdy, premium design, crisp and powerful sound, and smooth performance. Capture amazing images with the 13MP Exmor R sensor camera and the 8MP wide-angle front camera.


From the manufacturer



Perfect Hand Fit Design

Xperia R1 is not just a pleasure to behold. With its smooth, rounded frame and 2.5D curved glass, it fits and feels great in your hand.

Capture the Everyday Magic

With a 13MP predictive autofocus camera at hand, you’re always ready to catch and share life as it happens. In vivid, lifelike colors.

It's Always a Selfie Season

Xperia R1’s 8MP wide-angle front camera lets you fit in all your friends easily.



Best Mode for Best Captures

Choose from 12 unique modes, including portrait, landscape, sports, macro, and night mode, to capture every image in fine detail with the best-suited settings.

Impressive Display

View it all in glorious detail. Xperia R1 features a high-quality 5.2-inch display with a loop surface design and narrow borders, so you can view and share everything.

The Performance you Need

With an octa-core Qualcomm Snapdragon processor and 2GB of RAM, you have the speed and power you need. No lag. No hassle. Just smooth performance.

Sony Xperia R1 Sale



Upload All the Fun Faster with UDC

Uplink data compression (UDC) improves data transmission by compressing all uplink traffic, so the same information can be uploaded in fewer bits from the phone to the tower. Web pages load up to 50% faster, and social media feels smoother and more responsive.

4G Broadcast Ready

Xperia R1 is eMBMS (evolved multimedia broadcast multicast service) ready, which lets you get real-time weather, sports, and news updates based on your location; you can even stream live broadcasts without a separate internet connection. Enjoy updates on the go with LTE broadcast.

Ready for Android 8.0 Oreo

Xperia R1 ships with Android Nougat preinstalled and is upgradable to Android 8.0 Oreo; once the update is ready, you will receive a notification.

Made for India

Xperia R1 is specially designed for Indian customers, with beautiful design, bright display and powerful sound, it never ceases to excite.

Technical Details:

OS: Android 8.0 Oreo
RAM: 2 GB
Item weight: 150 g
Product dimensions: 14.6 x 0.8 x 7.2 cm
Batteries: 1 lithium-ion battery required (included)
Item model number: Xperia R1 Dual
Wireless communication technologies: Bluetooth, WiFi Hotspot
Connectivity technologies: 4G LTE, GPRS, WiFi
Special features: Dual SIM, GPS, FM Radio, Proximity sensor, eCompass, Accelerometer, Light sensor, Hall sensor, Gyro sensor, E-mail
Other camera features: 8MP front camera
Form factor: Touchscreen phone
Weight: 150 grams
Colour: Black
Battery power rating: 2620 mAh
What's in the box: Handset, Quick Charger, Type-C Data Cable, Startup Guide, Screen Guard and Stereo Headphones


Wednesday, 13 December 2017

Why is Litecoin rising? What is the difference between Litecoin and Bitcoin?

Litecoin

Litecoin (LTC or Ł) is a peer-to-peer cryptocurrency and open-source software project released under the MIT/X11 license. Creation and transfer of coins are based on an open-source cryptographic protocol and are not managed by any central authority. While inspired by, and in most regards technically nearly identical to, Bitcoin (BTC), Litecoin has some minor technical differences compared to Bitcoin and other major cryptocurrencies.



History

Litecoin was released via an open-source client on GitHub on October 7, 2011, by Charlie Lee, a former Google employee. The Litecoin network went live on October 13, 2011. It was a fork of the Bitcoin Core client, differing primarily by having a decreased block generation time (2.5 minutes), increased maximum number of coins, a different hashing algorithm (scrypt, instead of SHA-256), and a slightly modified GUI.

During November 2013, the aggregate value of Litecoin experienced massive growth, which included a 100% leap within 24 hours.

Litecoin reached a $1 billion market capitalization in November 2013.[8] By late November 2017, its market capitalization was US$4,600,081,733 ($85.18 per coin).

In May 2017, Litecoin became the first of the top-5 (by market cap) cryptocurrencies to adopt Segregated Witness.[11] Later in May of the same year, the first Lightning Network transaction was completed through Litecoin, transferring 0.00000001 LTC from Zürich to San Francisco in under one second.


Differences from Bitcoin


Litecoin differs from Bitcoin in several ways:

The Litecoin Network aims to process a block every 2.5 minutes, rather than Bitcoin's 10 minutes. The developers claim that this allows Litecoin to have faster transaction confirmation.
Litecoin uses scrypt in its proof-of-work algorithm, a sequential memory-hard function requiring asymptotically more memory than an algorithm which is not memory-hard.
Due to Litecoin's use of the scrypt algorithm, FPGA and ASIC devices made for mining Litecoin are more complicated to create and more expensive to produce than they are for Bitcoin, which uses SHA-256 (see the hashing sketch below).
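
As a concrete illustration of the hashing difference, here is a minimal Python sketch computing both styles of proof-of-work hash over placeholder header bytes. Litecoin's documented scrypt parameters are N=1024, r=1, p=1 with the block header used as its own salt; the header bytes below are not real block data.

```python
import hashlib

# Placeholder block header; a real header is an 80-byte binary structure.
block_header = b"example block header bytes, not real data"

# Bitcoin-style proof of work: double SHA-256 of the block header.
sha_hash = hashlib.sha256(hashlib.sha256(block_header).digest()).digest()

# Litecoin-style proof of work: scrypt with the header as both password
# and salt, using Litecoin's parameters N=1024, r=1, p=1, 32-byte output.
scrypt_hash = hashlib.scrypt(block_header, salt=block_header,
                             n=1024, r=1, p=1, dklen=32)

print("sha256d:", sha_hash.hex())
print("scrypt: ", scrypt_hash.hex())
```

The memory-hard scrypt step is what makes Litecoin-specific mining hardware harder to build than SHA-256 ASICs.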
