SilverPush leads the industry with the best demand-side platform and other products like Prism, Javelin and Parallels. We help brands maximize their advertising reach to their target audience, managed through a user-friendly dashboard. When it comes to digital advertising, we provide customized solutions backed by real-time analytics to help you plan, buy, measure and optimize TV and digital media. https://silverpush.co/



Monday, 6 July 2020

Beyond Black and White: The True Color of Brand Safety




Over the past few years, a number of brand safety issues have surfaced that have led marketers to review their brand safety measures. The current coronavirus crisis has intensified marketers' brand safety woes, as most brands don't want ad adjacency to content dealing with morbidity and mortality.

Common brand safety methods used by marketers include blacklisting and whitelisting. Blacklisting involves avoiding placement of ads against content containing one or more blocked keywords. In the case of video content, blocked keywords are searched for in the topic, title, description and metadata.
The keyword-based blacklisting method is in reality not as effective as it seems. It is marred by under- and over-blocking of content. Research shows that, because of the use of keyword blacklists, more than half of the safe stories published on major news platforms are incorrectly tagged as brand unsafe.

The keyword-based blacklisting method can lead to the blocking of completely innocuous content. This is because it fails to comprehend nuances in context, i.e. it is unable to understand the true context in which a keyword is used. For example, if "alcohol" is a blocked keyword, the blacklisting method will tag not only a video featuring drunk driving as unsafe, but also a video featuring a recipe in which alcohol is used as an ingredient, as shown in the sketch below.
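
The over-blocking problem is easy to see in a minimal sketch of keyword blacklisting. The blacklist and the video records below are hypothetical examples, not an actual brand safety system:

```python
# Minimal sketch of naive keyword blacklisting over video metadata.
# The blacklist and the video records below are hypothetical examples.
BLACKLIST = {"alcohol", "crash", "violence"}

def is_brand_unsafe(video: dict) -> bool:
    """Flag a video if any blacklisted keyword appears in its text fields."""
    text = " ".join([video["title"], video["description"], " ".join(video["tags"])]).lower()
    return any(keyword in text for keyword in BLACKLIST)

videos = [
    {"title": "Drunk driving crash caught on camera",
     "description": "Alcohol-related accident footage", "tags": ["news"]},
    {"title": "Classic coq au vin recipe",
     "description": "A French stew in which the alcohol is cooked off", "tags": ["cooking"]},
]

for video in videos:
    print(video["title"], "->", "blocked" if is_brand_unsafe(video) else "allowed")
# Both videos get blocked, even though the recipe is perfectly brand safe.
```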

Another problem with blacklisting is that universal blacklists cannot be created. They have to be regularly updated and modified according to brands' requirements, current happenings and events, the latest news, countries, languages and cultures. Blacklists also need to be tweaked regularly on the basis of consumers' current safe-content consumption patterns so that advertising campaigns can achieve greater reach. Overall, the keyword-based blacklisting method is cumbersome to implement because it needs constant fine-tuning. Under- and over-blocking of content remain common problems and prevent marketers from getting optimal results from their advertising campaigns.

A whitelist lists content that has been labeled as safe for ads to be placed against. It provides brands with a safe and trusted environment to advertise in. Curating a whitelist for advertising on a video platform, for example YouTube, involves tagging unsafe content at the keyword, topic, video and channel levels. Video-level tagging helps brands filter out unsafe videos from an otherwise safe channel; brands do not have to blacklist an entire channel just because of one or a few unsafe videos.

Again, like keyword blacklists, whitelists need to be regularly updated; otherwise campaigns will not see an increase in reach and brands will miss newer safe and engaging content for their ads, with ads displayed only against the same video content listed in a static whitelist.

Creating whitelists is not an easy process; it requires a lot of curation by marketers and is time-consuming and expensive. Because the whitelisting method limits the number of videos against which ads can be placed, marketers cannot take full advantage of huge video hosting platforms like YouTube: the campaign's reach is reduced and the right audience is not fully targeted.

The above-mentioned brand safety methods provide only suboptimal brand safety and have significant limitations. A highly effective way of ensuring brand suitability and safety is the contextual brand safety method, which makes use of AI and computer vision. AI-powered brand safety platforms that deploy computer vision technology provide a degree of contextual relevance unmatched by keyword-based methods.

Computer vision can accurately detect contexts in videos such as faces, objects, logos, on-screen text, emotions, scenes and activities. Thus, it can effectively detect unsafe or harmful contexts in videos without the risk of under- and over-blocking of content.
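
A minimal sketch of how detected visual contexts, rather than keywords, could drive a safety decision follows. The per-frame labels, the unsafe-context list and the threshold are hypothetical; in practice the labels would come from computer vision models run on video frames:

```python
# Minimal sketch of a context-based (rather than keyword-based) safety check.
# The per-frame labels, unsafe-context list and threshold are hypothetical.
UNSAFE_CONTEXTS = {"graphic_violence", "car_crash", "weapon"}

def classify_video(frame_labels: list, max_unsafe_ratio: float = 0.1) -> str:
    """Classify a video from per-frame visual context labels instead of keywords."""
    unsafe_frames = sum(1 for labels in frame_labels if labels & UNSAFE_CONTEXTS)
    ratio = unsafe_frames / max(len(frame_labels), 1)
    return "unsafe" if ratio > max_unsafe_ratio else "safe"

# A cooking video that mentions alcohol: no unsafe visual contexts detected.
cooking_video = [{"kitchen", "person", "bottle"}, {"food", "stove", "pan"}]
print(classify_video(cooking_video))  # -> safe

# A news clip dominated by crash footage: flagged as unsafe.
news_clip = [{"car_crash", "road"}, {"car_crash", "ambulance"}, {"reporter"}]
print(classify_video(news_clip))  # -> unsafe
```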

Amid the coronavirus pandemic, computer vision-powered brand safety platforms enable brands to selectively block ads against mortality-related coronavirus content while allowing ad placement against positive coronavirus content. Thus, brands can safely capitalize on news content; this is not possible with keyword blacklists, which fail to understand the true context in which the keyword "coronavirus" is used.

By using the AI-based contextual brand safety method, marketers can not only effectively block ad placement against recognized unsafe categories, but can also custom-define unsuitable contexts that are unique to a brand. This helps them provide brands with a fully suitable environment for advertising.

Computer vision enables marketers to go beyond blacklists and whitelists in order to achieve brand safety in its true color.    

Thursday, 2 July 2020

Synergistic Approach to Visual Content Moderation Is Both Effective and Efficient




An enormous amount of content in the form of images, videos and text is posted on the web every hour. Because this content is posted by users around the globe, it is highly heterogeneous.

User-generated content carries an inherent risk of being inappropriate, harmful, offensive or dangerous. Such content falls into categories such as nudity, terrorism, hatred, child exploitation, violence and misinformation, and requires strict moderation.

Content moderation is commonly achieved through human moderators. AI-based content moderation has also emerged and offers an automated way to filter out inappropriate content.

The enormous and heterogeneous body of user-generated content cannot be moderated effectively and efficiently by just one method of moderation, manual or automatic. The best approach is synergistic, i.e. a combination of human and AI moderation. Social media platforms are increasingly using this synergistic approach to achieve an optimum level of content moderation.

By using the synergistic approach to content classification and moderation, online platforms can enjoy the benefits of both human and AI moderation: the intelligence, wisdom and judgement of human beings, and the capability of AI-powered platforms to evaluate enormous amounts of content in no time.

AI content moderation platforms powered by computer vision make image and video moderation highly efficient. Computer vision can detect faces, emotions, objects, logos, on-screen text, actions and scenes in images and videos with high accuracy. Such platforms can determine whether an image or video needs to be reviewed by a human content moderator. Thus, human moderators are saved from filtering large volumes of content themselves, and from viewing large quantities of mentally disturbing content on a daily basis. They need to look only at the images and videos flagged by the AI platform and make a publishing decision. The decision taken by the human moderator feeds back into the algorithm, but the reason for the decision does not.

AI makes content moderation much easier for human moderators. By considering a number of factors, an advanced AI content moderation algorithm can calculate a relative risk score to determine if a user's post should be posted immediately after creation, reviewed before posting, or should not be posted. This relative score can then be used by human moderators while making a publishing decision.
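
A minimal sketch of such risk-score triage is shown below. The signals, weights and thresholds are hypothetical; a production system would tune them against moderators' past decisions:

```python
# Minimal sketch of risk-score triage for user posts.
# The signals, weights and thresholds are hypothetical examples.
WEIGHTS = {"nudity": 0.9, "violence": 0.8, "hate_symbols": 0.85, "spam": 0.4}

def risk_score(signals: dict) -> float:
    """Combine per-post signals (each in [0, 1]) into a single relative risk score."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS) / sum(WEIGHTS.values())

def triage(signals: dict) -> str:
    """Decide whether to publish, hold for human review, or block a post."""
    score = risk_score(signals)
    if score < 0.2:
        return "publish immediately"
    if score < 0.6:
        return "hold for human review"
    return "block"

print(triage({"violence": 0.1}))             # -> publish immediately
print(triage({"nudity": 0.5, "spam": 0.7}))  # -> hold for human review
```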

Although AI content classification and moderation enables online platforms to hire fewer human moderators, the need for human moderation will always remain; it is indispensable. Without human moderators, accurate content moderation is not possible. Only human content moderators can make the decisions that lie in the gray areas of decision-making, view a user's content from a subjective perspective, understand the cultural context of content, and so on.

Armed with a computer vision-powered video and image moderation platform, human content moderators can easily identify and filter out inappropriate visual content from the large volumes of user-generated content posted on online platforms.

By following a synergistic approach, which involves using both AI and human moderation, online platforms dealing with loads of user generated content can achieve efficient and effective content moderation.

Tuesday, 30 June 2020

Which Is Better for Your Business – Manual or AI Visual Content Moderation?





Visual content moderation is important for businesses and brands, especially if they have to deal with a lot of user-generated visual content. Any association with inappropriate content can damage their reputation, weaken consumer trust and result in a decrease in sales.

Traditionally, visual content classification and moderation has been done manually. But with the advent of AI, automated content moderation platforms have emerged. These platforms make use of computer vision and provide an effective way to classify and moderate images and videos.
Whether a brand or business should moderate visual content manually, use AI-powered automated content moderation, or augment manual moderation with an automated system depends on a number of factors, discussed below.

Source of content
In order to build brand recognition and consumer trust, more and more brands are now allowing user-generated content on their own platforms. However, user-generated content is potentially risky and can include inappropriate material that can be highly damaging for brands. Although brands can set content posting guidelines for users, they have no actual control over what a user posts. Moderating such content is a must. As there is a high chance of user-generated visual content being inappropriate or unsuitable, brands should opt for a computer vision-powered video and image classification and moderation platform.
If most of a brand's visual content is not user-generated, but is sourced internally or from highly trustworthy third parties, then video and image moderation can be performed manually by hiring human content moderators, and there is less need for an automated system.

Volume of content
For brands that have to deal with a large volume of visual content, especially user-generated content, manual moderation does not work effectively and efficiently. They should make use of computer vision-powered image and video moderation platforms.
AI-powered systems can tackle enormous content volumes with a high degree of accuracy. Computer vision technology effectively classifies and tags visual content at scale. Such automated systems are not plagued by human error, can work continuously unlike human beings, and their algorithms keep learning from the data they handle.

Nature of content 
An automated AI content moderation platform can effectively filter out content such as "not safe for work" images and videos, and other forms of inappropriate, offensive or dangerous content, but it falls short when it comes to filtering out misinformation. Here, intervention from human content moderators is required.
User-generated visual content can be highly disturbing for human content moderators. Filtering out such content through an automated, computer vision-powered content classification and moderation platform is the best way to prevent ill effects on their mental health.
Hiring a large number of human moderators is quite expensive and may not be feasible for businesses with small budgets. Also, in most cases, as discussed above, manual moderation is less effective than computer vision-powered visual content moderation.
For brands or businesses that have to handle a large amount of user-generated visual content, computer vision-based content moderation is much better than manual moderation in terms of accuracy, effectiveness and efficiency.


Thursday, 26 March 2020


Contextual Targeting Offers the Most Viable Advertising Strategy in the GDPR Era


The General Data Protection Regulation (GDPR) has changed the way businesses can handle the personal data of citizens of the European Union. Any individual, company or organization, whether located in the EU or not, that stores or processes EU citizens' personal data must comply with the GDPR. The GDPR, which came into effect on 25 May 2018, enables EU citizens to exercise control over their personal data.

Article 4(1) of the GDPR defines personal data as follows:



“Personal data means any information relating to an identified or identifiable natural person (data subject); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”


The online advertising industry is one of the industries most affected by the GDPR. Online advertisers use third-party cookies as the main tool for tracking users' online activities in order to serve them highly specific ads. Cookies are small text files that get stored in users' web browsers.


Third-party cookies serve as a trace for advertisers. Through third-party cookies, advertisers are able to create a rich profile of each user that includes the websites they visit, their interests, the products they buy, and more. Third-party cookies store enough user data to come under the GDPR scanner.


According to the CIGI-Ipsos Global Survey on Internet Security and Trust 2019, in which more than 25,000 internet users from twenty-five countries across the globe participated, 78% of respondents said they were very or somewhat concerned about their online privacy, and 53% said they were much more or somewhat more concerned than they were a year ago. While 78% of all respondents in 2019 were concerned about their online privacy, 90% or more were concerned in Egypt, Hong Kong, India, Nigeria and Mexico, and more than 85% in South Africa, Indonesia and South Korea.

With rising privacy concerns among consumers, the coming into effect of the GDPR and the California Consumer Privacy Act (CCPA), and Google's gradual phasing out of third-party cookies in Chrome, digital marketers have started looking into alternative ways of delivering online ads to consumers that are both effective and compliant with personal data protection regulations.

In the era of the GDPR and other online privacy laws, contextual targeting offers an effective way for advertisers to display online ads while remaining compliant with privacy regulations. Contextual advertising allows advertisers to display ads on a website by targeting its content. Ads are displayed on the basis of keywords or topics. This method therefore displays ads that are relevant to the content, and hence increases the chances of users clicking on them. For example, if a brand wants to sell smartphones, it can have its ads placed on websites with content about smartphones, gadgets, technology, and so on.
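
A minimal sketch of keyword-based contextual ad matching is shown below. The campaigns, keywords and page text are hypothetical examples, not an actual ad-serving system:

```python
# Minimal sketch of keyword-based contextual ad matching.
# The campaigns, keywords and page text are hypothetical examples.
CAMPAIGNS = {
    "smartphone_ads": {"smartphone", "gadget", "android", "technology"},
    "cookware_ads": {"recipe", "kitchen", "cookware"},
}

def match_campaigns(page_text: str) -> list:
    """Return the campaigns whose keywords appear in the page content."""
    words = set(page_text.lower().split())
    return [name for name, keywords in CAMPAIGNS.items() if words & keywords]

page = "The best budget smartphone and gadget picks of the year"
print(match_campaigns(page))  # -> ['smartphone_ads']
```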


An advanced form of contextual advertising involves semantic targeting, which makes use of machine learning algorithms to understand the meaning of each page of content on a website, rather than just looking for keywords placed on a web page.


As with text content, contextual targeting offers a GDPR-compliant advertising solution for video content. Conventional contextual video advertising works by identifying keywords, which often results in the placement of irrelevant ads.

The innovative artificial intelligence and computer vision powered in-video contextual advertising technology overcomes the limitations of traditional contextual advertising. It offers an effective, GDPR-compliant solution to advertisers for displaying contextually relevant in-video ads to users. It works by detecting faces, objects, emotions, logos, activities and scenes in video content. It then serves the ads that are fully in line with what the user is currently watching, thus allowing for a very high chance of user engagement.       
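
A minimal sketch of in-video contextual ad selection follows. The ad inventory and the detected contexts are hypothetical; in practice the contexts (faces, objects, logos, scenes, activities) would be detected by computer vision models running on the video stream:

```python
# Minimal sketch of in-video contextual ad selection.
# The ad inventory and detected contexts are hypothetical examples.
AD_INVENTORY = {
    "sports_drink": {"running", "gym", "football"},
    "travel_agency": {"beach", "airport", "mountains"},
}

def pick_ad(detected_contexts: set):
    """Pick the ad whose target contexts overlap most with what is on screen."""
    best_ad, best_overlap = None, 0
    for ad, targets in AD_INVENTORY.items():
        overlap = len(targets & detected_contexts)
        if overlap > best_overlap:
            best_ad, best_overlap = ad, overlap
    return best_ad

print(pick_ad({"football", "stadium", "crowd"}))  # -> sports_drink
```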

Contextual, AI-powered advertising does not collect, store or utilize users' personal data for displaying ads, thus offering a GDPR-compliant approach. It only considers what a user is currently engaging with and serves them contextually relevant ads.

AI-powered contextual targeting is highly effective for advertisers, unobtrusive for consumers, and compliant with the GDPR.

Tuesday, 3 March 2020

We Are Excited to Announce Our New Brand Identity



We are delighted to announce our new brand identity as part of the ongoing evolution of our brand. Our business has grown, our technology has evolved, we are digging into new areas and have launched new products, and so we thought that it’s time for a change. We have refreshed our logo and website to reflect who we are today and to symbolize our future.

Our new brand identity resonates with our focus on AI-powered in-video contextual advertising. It perfectly aligns our company with our successful foray into offering cutting-edge AI-powered solutions that are redefining the limits of in-video contextual targeting.

With blue and green colors on our new website, we have centered our new identity around AI and technology, keeping it modern and focused on trust. The yellow color embodies the fresh and playful characteristics of the brand, offering flexibility for future innovation. These branding elements have also translated into a new logo, which projects motion and pace.



We started our journey in 2012 as the first Demand Side Platform in India. Since then, we have brought many innovative products to the market, including the first-of-its-kind Cross-Device Ad Targeting solution launched in 2014, and the Real-time Moment Marketing platform, Parallels, in 2018.

We launched Mirrors, the first computer vision-powered in-video contextual advertising platform, in 2019. Mirrors effectively detects contexts like faces, objects, activities, emotions, scenes and logos in a streaming video for the placement of context-relevant ads. Through Mirrors, we have helped some of the largest brands in the world achieve unprecedented reach and user engagement.
Our new brand identity helps us bring to light our three inherent characteristics: creator, explorer and jester.

As a creator, we love to focus on innovation and quality. We always want to contribute to society by bringing something new into being, i.e. by realizing a vision. We draw deep satisfaction from creating something new that did not previously exist but has the potential to revolutionize the industry. Our in-video contextual advertising platform, based on artificial intelligence (AI) and computer vision, is a product of our creator mind and is ushering in a new era in the ad tech industry.
   
Our explorer characteristic is exhibited in our desire and efforts to do groundbreaking and pioneering work. We want to have an explorer's attitude towards the work we do and the way we do it. We don't want to take the conventional, pre-defined path; we want to pave our own path and discover our own way of doing things so that we can bring ingenious products to the market. We want to be free from constraints, feel the freedom to explore technology in our own way, and enjoy our discoveries and innovations. Our explorer trait makes us utilize our capacities to the fullest, thereby allowing us to push the boundaries.

Our fun-loving, light-hearted and playful approach is a reflection of our jester trait. We think outside the box to develop innovative products that make people feel good. We combine fun with creativity and intelligence to offer ingenious solutions to the ad tech industry. Being quick-witted, highly adaptable and resourceful, we reframe concepts, present new perspectives and stir up change. Our jester trait helps us navigate difficult times and emerge as a real winner.

With this new company branding, we have moved beyond our legacy. We have always been a first mover in the problems we have solved, be it disrupting cross-device tracking or effective push notifications. We are now completely focused on transforming how advertisers reach their customers contextually with our unique offerings, and our new brand identity reflects this. Our tryst with AI and emerging technologies will continue, and we will be launching a new line of innovative products for the advertising industry in the future.