Industry Track Program

The web has proven to be a fertile ground for industrial research, and companies from small to large, from brand new to deeply entrenched, are making significant leaps forward in science and engineering. The WWW 2018 Industry Track highlights the spectrum of that research happening in industrial settings.

The track features presentations of research papers by authors from many major companies. These papers contribute original results obtained in an industrial environment, have clear industry impact, and highlight new research challenges motivated by practical tasks and settings.

The track is by invitation only: the Chairs invite papers that have already gone through the rigorous peer-review process of the research tracks but that stand apart from typical research papers in that they focus to a greater extent on solving non-trivial real-world problems and on building systems that are deployed or are in the process of being deployed.

Industry Track Chairs:

  • Freddy Lécué (Accenture, Ireland)
  • Natasha Noy (Google, US)

List of selected papers:

  • Better Caching in Search Advertising Systems with Rapid Refresh Predictions
    Authors: Conglong Li, David G. Andersen, Qiang Fu, Sameh Elnikety and Yuxiong He

    Keywords: Sponsored search, Cache systems, Machine learning

    Abstract:
    To maximize profit and connect users to relevant products and services, search advertising systems use sophisticated machine learning algorithms to estimate the revenue expectations of thousands of matching ad listings per query. These machine learning computations constitute a substantial part of the operating cost, e.g., 10% to 30% of the total gross revenues. It is desirable to cache and reuse previous computation results to reduce this cost, but caching introduces approximation which comes with potential revenue loss. To maximize cost savings while minimizing the overall revenue impact, an intelligent refresh policy is required to decide when to refresh the cached computation results. The state-of-the-art refresh heuristic uses revenue history to assign different refresh frequencies. Using the gradient boosting regression tree algorithm, we show that a rapid prediction framework with well-selected key features provides refresh decisions with higher accuracy than the heuristic. This enables us to build a prediction-based refresh policy and a cache achieving higher profit without manual parameter tuning.
    Simulations conducted on the logs from a major commercial search advertising system show that our proposed cache design improves the cost savings from 17% to 24% (1.41x) and reduces the negative revenue impact from -0.29% to -0.02% (0.07x) compared to the state-of-the-art manually-tuned heuristic-based cache design. Based on Microsoft’s FY16 Q4 earnings release, the heuristic-based cache would increase the net profit of Bing Ads by 20.7 to 70.5 million in the quarter, while our proposed cache could increase the net profit by 35.2 to 106.1 million (1.50~1.70x).
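
    As a rough illustration of the prediction-based refresh idea (a sketch under assumed features, not the paper's actual feature set or data): a gradient-boosted regressor predicts how far a cached revenue estimate has drifted, and entries with high predicted drift are refreshed.

```python
# Sketch of a prediction-based cache refresh policy (hypothetical features,
# not the paper's): a gradient-boosted regressor predicts how much a cached
# revenue estimate has drifted; entries whose predicted drift exceeds a
# threshold are refreshed.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic training data: [age of cache entry, query frequency, last revenue]
X = rng.random((500, 3))
# In this toy setup, drift grows with entry age plus a little noise.
y = 0.8 * X[:, 0] + 0.1 * rng.random(500)

model = GradientBoostingRegressor(n_estimators=50, random_state=0)
model.fit(X, y)

def should_refresh(entry_features, threshold=0.4):
    """Refresh the cached computation when predicted drift is large."""
    drift = model.predict(np.asarray([entry_features]))[0]
    return drift > threshold

refresh_old = should_refresh([0.95, 0.5, 0.5])   # old entry, likely drifted
refresh_new = should_refresh([0.05, 0.5, 0.5])   # recently computed
```

    A deployed policy would balance the predicted revenue loss of serving a stale estimate against the recomputation cost, rather than using a fixed threshold.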

  • Attribution Inference for Digital Advertising using Inhomogeneous Poisson Models
    Authors: Zachary Nichols and Adam Stein

    Keywords: attribution, bayesian inference, generalized linear models

    Abstract:
    Measuring the causal effect of advertising on driving desired behavior is an important problem to the digital publishing industry (the “attribution” problem). It is common to use observational methods for attribution, due to the high cost and difficulty of employing randomized controlled trials (RCTs). However, recent results have shown that even current sophisticated observational methods may be inaccurate, yielding incorrect estimates of the true effect of advertising. Here, we present a new observational attribution method based on a successful model of neural spiking that learns the temporal interactions between event-based time series. We train this model on data from several RCT marketing experiments, and show that it can accurately recover the true causal attribution.
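
    The core modeling idea can be sketched as follows (a simplified toy with made-up baseline, kernel scale, and decay constants; the paper's model is richer and fully Bayesian): ad exposures add decaying kernels to a Poisson intensity, and a conversion is credited to channels in proportion to their contribution to that intensity at conversion time.

```python
# Toy inhomogeneous-Poisson attribution sketch (all constants are assumptions):
# the conversion intensity is a baseline plus exponentially decaying kernels
# triggered by each ad exposure; attribution is each channel's share of the
# intensity at the moment of conversion.
import math

BASELINE = 0.01          # organic conversion intensity (assumed)
KERNEL_SCALE = 0.05      # lift per exposure (assumed)
KERNEL_DECAY = 24.0      # decay time constant in hours (assumed)

def channel_contribution(exposures, t):
    """Summed exponential-kernel lift from one channel's exposures before t."""
    return sum(KERNEL_SCALE * math.exp(-(t - ti) / KERNEL_DECAY)
               for ti in exposures if ti <= t)

def attribute(conversion_time, exposures_by_channel):
    """Share of the intensity at conversion time credited to each channel."""
    lifts = {ch: channel_contribution(ts, conversion_time)
             for ch, ts in exposures_by_channel.items()}
    total = BASELINE + sum(lifts.values())
    return {ch: lift / total for ch, lift in lifts.items()}

# A conversion at hour 48, after two display exposures and one recent email.
credit = attribute(48.0, {"display": [10.0, 40.0], "email": [47.0]})
```

    Because the baseline absorbs part of the intensity, the channel shares sum to less than one: the remainder is credit for organic conversion.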

  • Beyond Keywords and Relevance: A Personalized Ad Retrieval Framework in E-Commerce Sponsored Search
    Authors: Su Yan, Wei Lin, Tianshu Wu, Daorui Xiao, Xu Zheng, Bo Wu and Kaipeng Liu

    Keywords: ad retrieval, e-commerce sponsored search, personalization

    Abstract:
    In most sponsored search platforms, advertisers bid on keywords for their advertisements (ads). Given a search request, the ad retrieval module rewrites the query into bidding keywords and uses these keywords as keys to select the top N ads through inverted indexes. As a result, an ad will not be retrieved for a related query if the advertiser did not bid on the corresponding keywords. Moreover, most ad retrieval approaches regard rewriting and ad selection as two separate tasks and focus on boosting the relevance between search queries and ads. Recently, more and more personalized information has been introduced into e-commerce sponsored search, such as user profiles and long-term and real-time clicks. Personalized information allows ad retrieval to employ more elements (e.g., real-time clicks) as search signals and retrieval keys; however, it also makes it harder to compare ads retrieved through different signals. To address these problems, we propose a novel ad retrieval framework that goes beyond keywords and relevance in e-commerce sponsored search. First, we employ historical ad click data to initialize a hierarchical network representing signals, keys, and ads, into which personalized information is introduced. Then we train a model on top of the hierarchical network by learning the weights of its edges. Finally, we select the best edges according to the model, boosting RPM/CTR. Experimental results on our e-commerce platform demonstrate that our ad retrieval framework achieves good performance.
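
    A toy sketch of the signal → key → ad retrieval idea (illustrative names, weights, and graph, not the paper's learned model): each search signal activates keys, keys point to ads through an inverted index, and ads are ranked by the total weight of the paths that reach them.

```python
# Toy retrieval over a signal -> key -> ad network with edge weights.
# All signals, keys, ads, and weights below are illustrative assumptions.
from collections import defaultdict

# Edges from signals (query rewrites, real-time clicks, user profile) to keys.
signal_to_key = {
    "query:running shoes": [("key:running shoes", 0.9), ("key:sneakers", 0.6)],
    "click:nike pegasus":  [("key:nike", 0.8)],
}
# Edges from keys to ads (a weighted inverted index).
key_to_ad = {
    "key:running shoes": [("ad:1", 0.7), ("ad:2", 0.5)],
    "key:sneakers":      [("ad:2", 0.9)],
    "key:nike":          [("ad:3", 0.95)],
}

def retrieve(signals, top_n=2):
    """Rank ads by the accumulated weight of signal->key->ad paths."""
    scores = defaultdict(float)
    for sig in signals:
        for key, w1 in signal_to_key.get(sig, []):
            for ad, w2 in key_to_ad.get(key, []):
                scores[ad] += w1 * w2       # accumulate path weights
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

top_ads = retrieve(["query:running shoes", "click:nike pegasus"])
```

    Scoring all signals in one network makes ads retrieved through different signals directly comparable, which keyword-only pipelines cannot do.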

  • Attention Convolutional Neural Network for Advertiser-Level CTR Forecasting
    Authors: Hongchang Gao, Deguang Kong, Miao Lu, Xiao Bai and Jian Yang

    Keywords: Click-Through Rate, Advertiser-Level, Time Series, Attention Convolutional Neural Network, Context

    Abstract:
    Click-through rate (CTR) prediction is a critical problem in online advertising. Most existing research focuses only on user-level CTR prediction. However, advertiser-level CTR forecasting also plays a very important role, because advertisers typically decide, based on CTR forecasts, how much they would like to bid for advertisements to achieve the maximum clicks given their budget. Over-forecasting leads the advertiser to pay more than necessary yet earn a lower return on investment (ROI); under-forecasting leads the advertiser to spend less on campaigns but fall short of the desired ROI goals.
    In this paper, we focus on advertiser-level CTR prediction and formulate it as a time series forecasting problem based on the historical CTR record. This is a very challenging problem due to the heavy fluctuation and high non-linearity of the time series.
    Furthermore, advertisers usually provide useful side information for their campaigns, such as text descriptions and targeted locations and devices, which has a high correlation with CTR but has not yet been used for CTR forecasting. Thus, we propose a novel context-aware attention convolutional neural network (CACNN), which can capture the high non-linearity and local information of the time series, as well as the underlying correlation between the CTR time series and the context information. As far as we know, this is the first work to employ a convolutional neural network and incorporate heterogeneous information to perform time series forecasting at the advertiser level.
    Extensive experiments on real advertiser data from a popular website confirmed the effectiveness of the proposed approach.

  • Discovering Progression Stages in Trillion-Scale Behavior Logs
    Authors: Kijung Shin, Mahdi Shafiei, Myunghwan Kim, Aastha Jain and Hema Raghavan

    Keywords: User modeling, Progression stages, MapReduce

    Abstract:
    User engagement is a key factor in the success of web services. Studying the following questions will help establish business strategies that lead to such success: How do the behaviors of users in a web service evolve over time? To reach a certain engagement level, what are the common stages that many users go through? How can we represent the stage that each individual user is in?
    To answer these questions, we propose a behavior model that discovers the progression of users’ behaviors from a given starting point – such as a new subscription or the first experience of certain features – to a particular target stage, such as a predefined engagement level of interest. Under our model, transitions over stages represent the progression of users, and each stage is characterized by probability distributions over types of actions, frequencies of actions, and next stages. Each user performs actions and moves to a next stage following the probability distributions characterizing the current stage.
    We also develop a fast and memory-efficient algorithm that fits our model to trillions of behavioral logs. Our algorithm scales linearly with the size of the data. In particular, its distributed version, implemented in the MapReduce framework, successfully handles petabyte-scale data with trillions of actions.
    Lastly, we show the effectiveness of our model and algorithm by applying them to real-world data from LinkedIn, where we discover meaningful stages that LinkedIn users go through leading to a predefined target goal. In addition, our trained models are shown to be useful for downstream tasks such as prediction of future actions.
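
    The generative side of the stage model can be sketched like this (toy hand-set stages and probabilities, not parameters learned from LinkedIn data): each stage carries a distribution over action types and over next stages, and a user trajectory is sampled by acting and transitioning until a target stage is reached.

```python
# Generative sketch of the progression-stage model. Stage names, action
# types, and all probabilities below are illustrative assumptions.
import random

STAGES = {
    "new":     {"actions": {"browse": 0.8, "connect": 0.2},
                "next":    {"new": 0.6, "casual": 0.4}},
    "casual":  {"actions": {"browse": 0.5, "connect": 0.3, "post": 0.2},
                "next":    {"casual": 0.7, "engaged": 0.3}},
    "engaged": {"actions": {"post": 1.0}, "next": {"engaged": 1.0}},
}
TARGET = "engaged"

def sample(dist, rng):
    """Draw one item from a {value: probability} distribution."""
    items, weights = zip(*dist.items())
    return rng.choices(items, weights=weights)[0]

def trajectory(rng, max_steps=100):
    """Sample (stage, action) pairs until the target stage or a step cap."""
    stage, path = "new", []
    for _ in range(max_steps):
        path.append((stage, sample(STAGES[stage]["actions"], rng)))
        if stage == TARGET:
            break
        stage = sample(STAGES[stage]["next"], rng)
    return path

path = trajectory(random.Random(0))
```

    Fitting reverses this process: given trillions of logged actions, the algorithm estimates the per-stage action and transition distributions that best explain the observed trajectories.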

  • Identifying Modes of User Engagement with Online News and Their Relationship to Information Gain in Text
    Authors: Nir Grinberg

    Keywords: User engagement, Online news, Information gain, Reading

    Abstract:
    Prior work established the benefits of server-recorded user engagement measures (e.g. clickthrough rates) for improving the results of search engines and recommendation systems.
    Client-side measures of post-click behavior have received relatively little attention, despite the fact that publishers now have the ability to measure how millions of people interact with their content at a fine resolution using client-side logging.
    In this study, we examine patterns of user engagement in a large, client-side log dataset of over 7.7 million page views (including both mobile and non-mobile devices) of 66,821 news articles from seven popular news publishers. For each page view we use three summary statistics: dwell time, the furthest position the user reached on the page, and the amount of interaction with the page through any form of input (touch, mouse move, etc.). We show that simple transformations on these summary statistics reveal six prototypical modes of reading that range from scanning to extensive reading and persist across sites. Furthermore, we develop a novel measure of information gain in text to capture the development of ideas within the body of articles and investigate how information gain relates to the engagement with articles. Finally, we show that our new measure of information gain is particularly useful for predicting reading of news articles before publication, and that the measure captures unique information not available otherwise.
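
    The clustering step can be illustrated roughly as follows (synthetic page views and k=3 for brevity; the paper identifies six modes in real logs): log-transform the heavy-tailed summary statistics, then cluster the transformed features.

```python
# Illustrative sketch of grouping page views into engagement modes.
# The data is synthetic and k=3 here; the paper finds six modes in real logs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Columns: dwell time (s), max scroll depth (0-1), interaction events.
scanners = np.column_stack([rng.uniform(1, 10, 50),
                            rng.uniform(0.0, 0.2, 50),
                            rng.integers(0, 3, 50)])
readers  = np.column_stack([rng.uniform(120, 600, 50),
                            rng.uniform(0.8, 1.0, 50),
                            rng.integers(20, 60, 50)])
mid      = np.column_stack([rng.uniform(30, 90, 50),
                            rng.uniform(0.3, 0.6, 50),
                            rng.integers(5, 15, 50)])
views = np.vstack([scanners, readers, mid])

# Simple transformation (log on heavy-tailed columns) before clustering.
features = np.column_stack([np.log1p(views[:, 0]),
                            views[:, 1],
                            np.log1p(views[:, 2])])
modes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
```

    On real logs the number of modes is chosen from the data rather than fixed in advance, and the resulting clusters range from scanning to extensive reading.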

  • DKN: Deep Knowledge-Aware Network for News Recommendation
    Authors: Hongwei Wang, Fuzheng Zhang, Xing Xie and Minyi Guo

    Keywords: news recommendation, knowledge graph representation, deep neural networks, attention model

    Abstract:
    Online news recommender systems aim to address the information explosion of news and make personalized recommendations for users. In general, news language is highly condensed, full of knowledge entities and common sense. However, existing methods are unaware of such external knowledge and cannot fully discover latent knowledge-level connections among news. The recommended results for a user are consequently limited to simple patterns and cannot be extended reasonably. News recommendation also faces the challenges of the high time-sensitivity of news and the dynamic diversity of users’ interests. To solve the above problems, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation. DKN is a content-based deep recommendation framework for click-through rate prediction. The key component of DKN is a multi-channel and word-entity-aligned knowledge-aware convolutional neural network (KCNN) that fuses semantic-level and knowledge-level representations of news. KCNN treats words and entities as multiple channels, and explicitly keeps their alignment relationship during convolution. In addition, to address users’ diverse interests, we also design an attention module in DKN to dynamically aggregate a user’s history with respect to current candidate news. Through extensive experiments on a real online news platform, we demonstrate that DKN achieves substantial gains over state-of-the-art deep recommendation models. We also validate the efficacy of the usage of knowledge in DKN.

  • Pixie: A System for Recommending 3+ Billion Items to 200+ Million Users in Real-Time
    Authors: Chantat Eksombatchai, Pranav Jindal, Jerry Zitao Liu, Yuchen Liu, Rahul Sharma, Charles Sugnet, Mark Ulrich and Jure Leskovec

    Keywords: recommender systems, collaborative filtering, random walks

    Abstract:
    User experience in modern content discovery applications critically depends on high-quality personalized recommendations. However, building systems that provide such recommendations presents a major challenge due to a massive pool of items, a large number of users, and requirements for recommendations to be responsive to user actions and generated on demand in real-time. Here we present Pixie, a scalable graph-based real-time recommender system that we developed and deployed at Pinterest. Given a set of user-specific pins as a query, Pixie selects in real-time, from billions of possible pins, those that are most related to the query. To generate recommendations, we develop the Pixie Random Walk algorithm, which utilizes the Pinterest object graph of 3 billion nodes and 17 billion edges. Experiments show that recommendations provided by Pixie lead to up to 50% higher user engagement when compared to the previous Hadoop-based production system. Furthermore, we develop a graph pruning strategy that leads to an additional 58% improvement in recommendations. Lastly, we discuss system aspects of Pixie, where a single server executes 1,200 recommendation requests per second with 60 millisecond latency. Today, systems backed by Pixie contribute to more than 80% of all user engagement on Pinterest.
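
    The random-walk idea behind Pixie can be sketched on a toy graph (illustrative pins and boards; the real algorithm adds walk biasing, early stopping, and other refinements): short restarting walks from the query pins over the pin–board bipartite graph count pin visits, and the most-visited pins become the recommendations.

```python
# Minimal random-walk-with-restart sketch over a toy pin/board bipartite
# graph (illustrative data, not Pinterest's graph or full algorithm).
import random
from collections import Counter

# Bipartite adjacency: pins <-> boards.
edges = {
    "pin:a": ["board:1", "board:2"], "pin:b": ["board:1"],
    "pin:c": ["board:2", "board:3"], "pin:d": ["board:3"],
    "board:1": ["pin:a", "pin:b"],   "board:2": ["pin:a", "pin:c"],
    "board:3": ["pin:c", "pin:d"],
}

def pixie_walk(query_pins, n_steps=10000, alpha=0.5, rng=None):
    """Count pin visits over restarting two-hop walks; rank by visit count."""
    rng = rng or random.Random(0)
    visits = Counter()
    node = rng.choice(query_pins)
    for _ in range(n_steps):
        if rng.random() < alpha:            # restart at a query pin
            node = rng.choice(query_pins)
        node = rng.choice(edges[node])      # pin  -> board
        node = rng.choice(edges[node])      # board -> pin
        visits[node] += 1
    for p in query_pins:                    # don't recommend the query itself
        visits.pop(p, None)
    return [p for p, _ in visits.most_common()]

recs = pixie_walk(["pin:a"])
```

    Visit counts concentrate near the query, so pins one board away outrank pins reachable only through longer paths.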

  • Unveiling a Socio-Economic System in a Virtual World: A Case Study of an MMORPG
    Authors: Selin Chun, Daejin Choi, Jinyoung Han, Huy Kang Kim and Taekyoung Kwon

    Keywords: Socio-economic behavior analysis, MMORPG, Online games

    Abstract:
    Understanding the socio-economic system in MMORPGs can provide important implications for how people participate in an economy and how they interact with each other. In this paper, we model the socio-economic system of Aion, a popular MMORPG run by NCsoft, as a multi-layer graph. Using a dataset consisting of 94,870 users and their activity records spanning three months, we examine how economic activities are associated with social interactions, and find that social interactions such as participating in a party or exchanging messages are highly correlated with trade activities. We also find that the virtual economy in Aion is heavily inclined toward a small number of upper-class users who play a crucial role in it. Our analysis of the upper-class users reveals that a significant portion of them reach the maximum level and tend to either (i) have many social interactions with others or (ii) spend an extremely large amount of time playing with no social activity. We also reveal that there are some low-level upper-class users who earn a great deal of money but hardly socialize with others. Lastly, we show how upper-class users who are at a low level, play the game far more than others, or rarely interact with other users are associated with Real Money Trade (RMT), a potentially illicit practice of gathering in-game money to exchange for real-world money. We reveal that more than half of the total money exchanged through trades is associated with upper-class users involved in RMT.

  • No Silk Road for Online Gamers!: Using Social Network Analysis to Unveil Black Markets in Online Games
    Authors: Eunjo Lee, Jiyoung Woo, Hyoungshick Kim and Huy Kang Kim

    Keywords: online game black market, real money trading, network analysis, community detection

    Abstract:
    Online games involve very large numbers of users who are interconnected and interact with each other via the Internet. We studied the characteristics of exchanging virtual goods for real money through processes called “real money trading” (RMT). This exchange can influence online game user behaviors and damage the reputation of game companies. We examined in-game transactions to reveal RMT by constructing a social graph of virtual goods exchanges in an online game and identifying network communities of users. We analyzed approximately 6,000,000 transactions in a popular online game and inferred RMT transactions by comparing them with RMT transactions crawled from an out-of-game market. Our findings are summarized as follows: (1) the size of the RMT market can be approximately estimated; (2) professional RMT providers typically form a specific network structure (either star-shaped or chain-shaped) in the trading network, which can be used as a clue for tracing RMT transactions; and (3) the observed RMT market has evolved over time into a monopolized market with a small number of large virtual goods providers.
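
    The star-shape cue from finding (2) can be illustrated with a small structural check (toy trade graph and threshold; the paper uses full community detection on millions of transactions): a provider account looks like a star hub when its many trading partners rarely trade with each other.

```python
# Toy structural check for star-shaped trading patterns.
# The trade graph, degree cutoff, and score threshold are illustrative.
from itertools import combinations

# Undirected trade edges (who traded virtual goods with whom).
trades = [("hub", f"buyer{i}") for i in range(8)]          # star around "hub"
trades += [("u1", "u2"), ("u2", "u3"), ("u3", "u1")]       # ordinary clique

adj = {}
for a, b in trades:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def star_score(node):
    """1.0 when no two neighbours trade with each other (perfect star)."""
    nbrs = adj[node]
    if len(nbrs) < 2:
        return 0.0
    pairs = list(combinations(nbrs, 2))
    linked = sum(1 for x, y in pairs if y in adj[x])
    return 1.0 - linked / len(pairs)

# Flag high-degree accounts whose neighbourhood is almost edge-free.
suspects = [n for n in adj if len(adj[n]) >= 5 and star_score(n) > 0.9]
```

    In graph terms this score is one minus the local clustering coefficient; RMT hubs stand out because ordinary players' trading partners tend to know each other.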

  • CrimeBB: Enabling Cybercrime Research on Underground Forums at Scale
    Authors: Sergio Pastrana, Daniel R. Thomas, Alice Hutchings and Richard Clayton

    Keywords: Underground Forums, Cybercrime, Money Laundering, Data Sharing, Web Crawling, Ethics, CrimeBot, CrimeBB

    Abstract:
    Underground forums allow criminals to interact, exchange knowledge, and trade in products and services. They also provide a pathway into cybercrime, tempting the curious to join those already motivated to obtain easy money. Analysing these forums enables us to better understand the behaviours of offenders and pathways into crime. Prior research has been valuable, but limited by a reliance on datasets that are incomplete or outdated. More complete data, going back many years, allows for comprehensive research about the evolution of forums and their users. We describe CrimeBot, a crawler designed around the particular challenges of capturing data from underground forums. CrimeBot is used to update and maintain CrimeBB, a dataset of more than 42m posts made from 826k accounts in 4 different operational forums over a decade. This dataset presents a new opportunity for large-scale and longitudinal analysis using up-to-date information. We illustrate the potential by presenting a case study using CrimeBB, which analyses which activities lead new actors into engagement with cybercrime. CrimeBB is available to other academic researchers under a legal agreement, designed to prevent misuse and provide safeguards for ethical research.

  • A Cross-Platform Consumer Behavior Analysis of Large-Scale Mobile Shopping Data
    Authors: Hong Huang, Bo Zhao, Hao Zhao, Zhou Zhuang, Zhenxuan Wang, Xiaoming Yao, Xinggang Wang, Hai Jin and Xiaoming Fu

    Keywords: mobile usage, e-commerce, consumer behavior, data mining, predictability

    Abstract:
    The proliferation of mobile devices, especially smartphones, brings remarkable opportunities for both industry and academia. In particular, the massive data generated from users’ usage logs make it possible for stakeholders to better understand consumer behaviors with the aid of data mining. In this paper, we examine consumer behaviors across multiple platforms based on a large-scale mobile Internet dataset from a major telecom operator, covering 9.8 million users from two regions, among which 1.4 million users visited e-commerce platforms within the one-week span of our study. We make several interesting observations and examine the cultural differences of users from the two regions. Our analysis shows that, among the multiple e-commerce platforms available, most mobile users are loyal to their favorite sites; most people (60%) tend to make quick decisions when buying something online, usually taking less than half an hour. Furthermore, we find that people in residential areas are much more likely to make purchases than those in business districts, and that purchases mostly take place during non-work time. Meanwhile, people with medium socioeconomic status like browsing and purchasing on e-commerce platforms, while people with high and low socioeconomic status are more likely to complete purchases online. We also show the predictability of cross-platform shopping behaviors with extensive experiments on the basis of our observed data. Our discoveries could serve as a guide for future e-commerce strategy making.

  • Modeling Dynamic Competition on Crowdfunding Markets
    Authors: Yusan Lin, Peifeng Yin and Wang-Chien Lee

    Keywords: competition, competitiveness, crowdfunding, online market

    Abstract:
    The often fierce competition on crowdfunding markets can significantly affect project success. While various factors have been considered in predicting the success of crowdfunding projects, to the best of the authors’ knowledge, the phenomenon of competition has not been investigated. In this paper, we study competition on crowdfunding markets through data analysis, and propose a probabilistic generative model, the Dynamic Market Competition (DMC) model, to capture the competitiveness of projects in crowdfunding. Through an empirical evaluation using the pledging history of past crowdfunding projects, our approach has been shown to capture the competitiveness of projects very well, and it significantly outperforms several baseline approaches in predicting the daily collected funds of crowdfunding projects, reducing errors by 31.73% to 45.14%. In addition, our analyses of the correlations between project competitiveness, project design factors, and project success indicate that highly competitive projects, while being winners under various settings of project design factors, are particularly impressive with high pledging goals and high-priced rewards, compared to medium- and low-competitiveness projects. Finally, the competitiveness of projects learned by DMC is shown to be very useful for predicting final success and the number of days taken to hit the pledging goal, reaching 85% accuracy and an error of less than 7 days, respectively, with limited information at the early pledging stage.
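
    The competition intuition can be sketched with a toy share model (a softmax split of daily pledges by a latent competitiveness score; this is an illustration only, not the paper's DMC model): when a strong project enters the market, incumbents' daily takes visibly shrink.

```python
# Toy market-share sketch of competition on a crowdfunding market.
# Project names, scores, and the market total are illustrative assumptions.
import math

def daily_shares(competitiveness, market_total=10000.0):
    """Split the day's total pledged funds by softmax of competitiveness."""
    weights = {p: math.exp(c) for p, c in competitiveness.items()}
    z = sum(weights.values())
    return {p: market_total * w / z for p, w in weights.items()}

before = daily_shares({"gadget": 1.0, "boardgame": 0.5})
after = daily_shares({"gadget": 1.0, "boardgame": 0.5, "hit_project": 3.0})
```

    In the generative model proper, the competitiveness scores are latent variables learned from daily pledging histories rather than set by hand.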

  • A Feature-Oriented Sentiment Rating for Mobile App Reviews
    Authors: Washington Luiz, Felipe Viegas, Rafael Alencar, Fernando Mourão, Thiago Salles, Marcos Goncalves, Dárlinton Carvalho and Leonardo Rocha

    Keywords: Topic Model, Sentiment Analysis, Analysis of online reviews

    Abstract:
    In this paper, we propose a general framework that allows developers to filter, summarize and analyze user reviews written about applications on app stores. More specifically, our framework is able to automatically extract relevant features from app reviews (e.g., information about functionalities, bugs, requirements, etc.) and analyze the sentiment associated with each of them. Our framework has three main building blocks, namely, (i) topic modeling, (ii) sentiment analysis and (iii) a summarization interface. Briefly speaking, the topic modeling block aims at finding semantic topics in textual comments. It represents the set of comments as a bag-of-words matrix and decomposes this matrix into matrices that capture the latent relationships between terms and comments. It is also responsible for extracting the target features based on the most relevant words of each discovered topic. The sentiment analysis block detects the sentiment (i.e., positive or negative) associated with each discovered feature. Finally, the summarization interface provides developers with an intuitive visualization of the features (i.e., topics) and their associated sentiment, providing richer information than a ‘star rating’ strategy. Our experimental evaluation shows that the first block (topic modeling) is able to organize the information provided by users into subcategories that facilitate the understanding of which features most positively or negatively impact the overall evaluation of the application. Regarding user satisfaction, we observe in our experiments that, although the star rating is a good measure of evaluation, the sentiment analysis technique is more accurate in capturing the sentiment transmitted by the user in a comment.
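
    The first two blocks can be sketched in a few lines (toy reviews and a tiny hand-made lexicon; the paper's framework is considerably richer): NMF decomposes the bag-of-words matrix into topics, and a lexicon vote labels the sentiment of each topic's reviews.

```python
# Minimal topic-modeling + sentiment sketch. The reviews, the two-topic
# setting, and the tiny sentiment lexicon are all illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF

reviews = [
    "love the interface great design",
    "great design love the colors",
    "app crashes constantly terrible bug",
    "crashes on startup terrible experience",
]
POSITIVE, NEGATIVE = {"love", "great"}, {"terrible", "crashes"}

vec = CountVectorizer()
bow = vec.fit_transform(reviews)                 # comments x terms matrix
nmf = NMF(n_components=2, random_state=0)
doc_topic = nmf.fit_transform(bow)               # decompose into 2 topics

def topic_sentiment(topic):
    """Sign of lexicon hits across reviews assigned to this topic."""
    score = 0
    for i, row in enumerate(doc_topic):
        if row.argmax() == topic:
            words = reviews[i].split()
            score += sum(w in POSITIVE for w in words)
            score -= sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative"

labels = sorted(topic_sentiment(t) for t in range(2))
```

    The summarization interface would then present each topic's top words alongside its sentiment, instead of a single aggregate star rating.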

  • Towards Automatic Numerical Cross-Checking: Extracting Formulas from Text
    Authors: Yixuan Cao, Hongwei Li, Ping Luo and Jiaquan Yao

    Keywords: Information Extraction, Relation Extraction, Iterative Relation Extraction, Numerical Cross-Checking, Formula Extraction

    Abstract:
    Verbal descriptions of the numerical relationships among objective measures are widespread in documents published on the Web, especially in the financial field. However, due to the large volume of documents and the limited time for manual cross-checking, these claims can be inconsistent with the original structured data of the related indicators even after official publication. Such errors can seriously affect investors’ assessment of a company and may cause them to undervalue the firm even if the mistakes are made unintentionally rather than deliberately. This creates an opportunity for automated Numerical Cross-Checking (NCC) systems. This paper introduces the key component of such a system, the formula extractor, which extracts formulas from verbal descriptions of numerical claims. Specifically, we formulate this task as a DAG-structure prediction problem, and propose an iterative relation extraction model to address it. In our model, we apply a bi-directional LSTM followed by a DAG-structured LSTM to extract formulas layer by layer iteratively. The model is built using a human-labeled dataset of tens of thousands of sentences. The evaluation shows that this model is effective in formula extraction: at the relation level, it achieves 97.78% precision and 98.33% recall, and at the sentence level, the predictions for 92.02% of sentences are perfect. Overall, the NCC project has received wide recognition in the Chinese financial community.
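
    The cross-checking step downstream of formula extraction can be sketched as follows (extraction itself, the paper's contribution, is mocked by hand-written formulas here; the operations and tolerance are assumptions): once a sentence has been turned into a formula, the reported numbers are verified against it within a rounding tolerance.

```python
# Toy numerical cross-check: verify reported numbers against an extracted
# formula. The ops and tolerance below are assumptions; in the paper the
# formulas come from a DAG-structured LSTM, mocked here by hand.
def check_claim(result, operands, op, rel_tol=5e-3):
    """Return True if `result` is consistent with op(operands)."""
    if op == "sum":
        expected = sum(operands)
    elif op == "pct_change":                 # (new - old) / old
        old, new = operands
        expected = (new - old) / old
    else:
        raise ValueError(f"unknown op: {op}")
    return abs(result - expected) <= rel_tol * max(abs(expected), 1e-9)

# "Revenue rose 12.5% from 4.0bn to 4.5bn" -> a pct_change claim: consistent.
consistent = check_claim(0.125, (4.0, 4.5), "pct_change")
# "Segments of 1.2bn and 2.3bn totalled 3.6bn" -> a sum claim: inconsistent.
inconsistent = check_claim(3.6, (1.2, 2.3), "sum")
```

    The tolerance matters in practice: published figures are rounded, so exact equality would flag many correct claims as errors.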

  • PhotoReply: Automatically Suggesting Conversational Responses to Photos
    Authors: Ning Ye, Ariel Fuxman, Vivek Ramavajjala, Sergey Nazarov, Patrick McGregor and Sujith Ravi

    Keywords: image, natural language, generative model, social, chat

    Abstract:
    We present an intelligent agent that automatically suggests responses to photos shared in messaging applications. For example, when a user receives a photo showing a dog, the agent suggests responses such as “Aww!” and “Cute terrier!” This simplifies composing responses on constrained mobile keyboards, and delights users with uncanny insights into the photos they receive. The agent is now an integral part of a major chat application, where it is the predictive assistance feature with the highest click-through rate.
    We formalize the problem of suggesting responses to images as an instance of multimodal learning akin to caption generation, a topic that has recently received significant attention in the research community. We then present a system that “translates” image pixels into text responses. The system includes a conditioned language model based on an LSTM which, given an embedding of the image pixels and the previously predicted words, calculates the probability of each word in a vocabulary being the next word in the generated response, and a triggering model trained with image embeddings and concept labels from a large concept taxonomy. We describe the training of the models and a thorough experimental evaluation based on crowdsourced datasets and live traffic.

  • Hidden in Plain Sight: Classifying Emails Using Embedded Image Contents
    Authors: Navneet Potti, James B. Wendt, Qi Zhao, Sandeep Tata and Marc Najork

    Keywords: information extraction, wrapper induction, email

    Abstract:
    A vast majority of the emails received by people today are machine-generated by businesses communicating with consumers. While some emails originate from a transaction (e.g., hotel or restaurant reservation confirmations, online purchase receipts, shipping notifications, etc.), a large fraction are commercial emails promoting an offer (a special sale, free shipping, availability for a limited time, etc.). The sheer number of these promotional emails makes it difficult for users to read them all and decide which ones are actually interesting and actionable. In this paper, we tackle the problem of extracting information from commercial emails promoting an offer to the user. This information enables an email platform to build several new experiences that can unlock the value in these emails without the user having to navigate and read all of them. For instance, we can highlight offers that are expiring soon, or display a notification when the user’s phone recognizes that they are at a merchant’s store and there is an unexpired offer from that merchant.
    A key challenge in extracting information from such commercial emails is that they are often image-rich and contain very little text. Training a machine learning (ML) model on a rendered image-rich email and applying it to each incoming email can be prohibitively expensive. In this paper, we describe a cost-effective approach for extracting signals from both the text and image content of commercial emails in the context of a free email platform that serves over a billion users around the world. The key insight is to leverage the template structure of emails, and use off-the-shelf OCR techniques to obtain the text from images to augment the existing text features offline. Compared to a text-only approach, we show that we are able to identify 9.12% more email templates corresponding to ~5% more emails being identified as offers. Interestingly, our analysis shows that this 5% improvement in coverage is across the board, irrespective of whether the emails were sent by large merchants or small local merchants, allowing us to deliver an improved experience for everyone.
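
    The template-level insight can be sketched as a cache keyed by template (OCR is stubbed out here; a real system would call an off-the-shelf OCR engine, and the template-id scheme is a simplification): because thousands of emails share one template, image text is extracted once per template, offline, instead of once per email.

```python
# Sketch of template-level OCR caching. The template-id scheme and the
# stubbed OCR function are illustrative assumptions, not the paper's system.
import hashlib

ocr_calls = 0

def fake_ocr(image_bytes):
    """Stand-in for an off-the-shelf OCR engine."""
    global ocr_calls
    ocr_calls += 1
    return "50% OFF  ENDS SUNDAY"

_template_text = {}   # template id -> extracted image text

def template_id(sender, image_bytes):
    """Emails from one sender reusing the same image map to one template."""
    return hashlib.sha256(sender.encode() + image_bytes).hexdigest()

def text_features(sender, body_text, image_bytes):
    """Augment an email's text features with cached per-template OCR text."""
    tid = template_id(sender, image_bytes)
    if tid not in _template_text:
        _template_text[tid] = fake_ocr(image_bytes)   # run once, offline
    return body_text + " " + _template_text[tid]

banner = b"\x89PNG...promo-banner"
for _ in range(1000):                      # 1000 emails, one shared template
    feats = text_features("deals@shop.example", "See inside!", banner)
```

    Amortizing OCR over templates is what makes image-derived features affordable at billion-user scale.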

  • Mining E-Commerce Query Relations using Customer Interaction Networks
    Authors: Bijaya Adhikari, Parikshit Sondhi, Wenke Zhang, Mohit Sharma and B. Aditya Prakash

    Keywords: Query Graphs, Customer Interaction, Graph Mining

    Abstract:
    Customer Interaction Networks (CINs) are a natural framework for representing and mining customer interactions with E-Commerce search engines. Customer interactions begin with the submission of a query formulated based on an initial product intent, followed by a sequence of product engagement and query reformulation actions. Engagement with a product (eg. clicks), signals its relevance to the customer’s product intent. Reformulation to a new query indicates either dissatisfaction with current results, or an evolution in the customer’s product intent. Analyzing such interactions within and across sessions, enables us to discover various query-query and query-product relationships.
    In this work, we begin by studying the properties of a real-world customer interaction network built from Walmart.com’s product search logs. We observe that CINs exhibit significantly different properties from other real-world networks (e.g., the WWW, social networks), making it possible to mine intent relationships between queries based purely on structural information. In particular, we show that the problem of clustering queries with similar intents can be formulated as a community detection task on CINs. Our results show that existing community detection methods already do a good job of identifying intent-based query clusters without using any textual features. We further identify their limitations and propose improved methods for the task. Finally, we show how these relations can be exploited to a) significantly improve search quality for poorly performing queries, and b) identify the most influential (i.e., critical) queries whose search quality is crucial for an E-Commerce search engine to satisfy the most customers. Via extensive experiments, we show that our CIN-based methods significantly outperform existing baselines in practice.
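The graph-construction step described above can be sketched with standard-library Python. This is not the authors' method: the session data is invented, and connected components serve as a crude stand-in for a proper community detection algorithm (e.g., Louvain) on the query graph.

```python
# Sketch: build a query-reformulation graph from session logs, then cluster
# queries structurally. Connected components stand in for community detection.
from collections import defaultdict

sessions = [
    ["red shoes", "red sneakers", "nike sneakers"],
    ["red sneakers", "running shoes"],
    ["garden hose", "hose reel"],
]

# Add an edge between consecutive queries in a session (a reformulation).
adj = defaultdict(set)
for session in sessions:
    for a, b in zip(session, session[1:]):
        adj[a].add(b)
        adj[b].add(a)

def components(adj):
    """Return the connected components of the query graph."""
    seen, clusters = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            q = stack.pop()
            if q in comp:
                continue
            comp.add(q)
            stack.extend(adj[q] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

clusters = components(adj)
```

Note that "red shoes" and "running shoes" end up in the same cluster purely through shared reformulation structure, with no textual features used at any point.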

  • Learning to Collaborate: Multi-Scenario Ranking via Multi-Agent Reinforcement Learning
    Authors: Jun Feng, Heng Li, Minlie Huang, Shichen Liu, Wenwu Ou, Zhirong Wang and Xiaoyan Zhu

    Keywords: multi-agent learning, reinforcement learning, learning to rank, joint optimization

    Abstract:
    Ranking is a fundamental and widely studied problem in scenarios such as search, advertising, and recommendation. However, joint optimization for multi-scenario ranking, which aims to improve the overall performance of several ranking strategies in different scenarios, remains largely unexplored. Separately optimizing each individual strategy has two limitations. The first is a lack of collaboration between scenarios: each strategy maximizes its own objective but ignores the goals of other strategies, leading to sub-optimal overall performance. The second is an inability to model correlations between scenarios: independent optimization in one scenario uses only its own user data and ignores the context available in other scenarios.
    In this paper, we formulate multi-scenario ranking as a fully cooperative, partially observable, multi-agent sequential decision problem. We propose a novel model named Multi-Agent Recurrent Deterministic Policy Gradient (MA-RDPG), which has a communication component for passing messages, several private actors (agents) that take ranking actions, and a centralized critic that evaluates the overall performance of the co-working actors. Each scenario is treated as an agent (actor). Agents collaborate by sharing a global action-value function (the critic) and by passing messages that encode historical information across scenarios. The model is evaluated in an online setting on a large E-commerce platform. Results show that the proposed model yields significant improvements over baselines in terms of overall performance.
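The message flow between actors can be illustrated schematically. This is not the paper's implementation: the policies below are trivial linear placeholders, and the critic and all training machinery are omitted; only the pattern of each actor consuming its local observation plus the message from the preceding scenario is shown.

```python
# Schematic sketch of MA-RDPG-style message passing between scenario actors.
# Placeholder deterministic policies; the centralized critic and gradient
# updates are deliberately omitted.

def actor(observation, message, weight):
    # Placeholder policy: emit a scalar ranking action and pass along
    # a message that accumulates ("encodes") historical information.
    action = weight * observation + 0.5 * message
    new_message = message + observation
    return action, new_message

def run_episode(observations, weights):
    """Run the scenarios in sequence, threading the message through."""
    message, actions = 0.0, []
    for obs, w in zip(observations, weights):
        action, message = actor(obs, message, w)
        actions.append(action)
    return actions, message

actions, final_msg = run_episode([1.0, 2.0, 3.0], [0.1, 0.2, 0.3])
```

The key structural point is that later scenarios see state accumulated by earlier ones, which is what enables the cross-scenario collaboration that separate per-scenario optimization cannot capture.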

  • HTTP/2 Prioritization and its Impact on Web Performance
    Authors: Maarten Wijnants, Robin Marx, Peter Quax and Wim Lamotte

    Keywords: HTTP/2, Web Performance Optimization (WPO), Page Load Time (PLT), Resource Loading, Prioritization

    Abstract:
    Web performance is a hot topic, as many studies have shown a strong correlation between slow web pages and loss of revenue due to user dissatisfaction. Front and center in Page Load Time (PLT) optimization is the order in which resources are downloaded and processed. The new HTTP/2 specification includes dedicated resource prioritization provisions, to be used in tandem with resource multiplexing over a single, well-filled TCP connection. However, little is yet known about its application by browsers and its impact on page load performance.
    This article details an extensive survey of modern User Agent implementations, concluding that the major vendors all approach HTTP/2 prioritization in widely different ways, from naive (Safari, IE, Edge) to complex (Chrome, Firefox). We investigate these discrepancies with a full factorial experimental evaluation involving eight prioritization algorithms, two off-the-shelf User Agents, 40 realistic webpages, and five heterogeneous (emulated) network conditions. We find that, in general, the more complex approaches yield the best results, while the naive implementations can lead to median visual load times that are over 25% slower. Furthermore, prioritization is found to matter most for heavyweight pages. Finally, it is ascertained that achieving PLT optimizations via generic server-side HTTP/2 re-prioritization schemes is a non-trivial task, and that their performance impact is influenced by the implementation intricacies of individual browsers.
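The naive-versus-complex distinction drawn above can be illustrated with a toy scheduler. This is not any browser's actual algorithm: the stream sizes and weights are invented, round-robin stands in for naive equal interleaving, and strict weight ordering stands in for a sequential, priority-aware scheme driven by HTTP/2 priority signals.

```python
# Toy contrast between two prioritization strategies over multiplexed streams.

def round_robin(streams, chunk=1):
    """Interleave all streams equally (naive, priority-oblivious)."""
    order = []
    pending = dict(streams)
    while pending:
        for name in list(pending):
            order.append(name)
            pending[name] -= chunk
            if pending[name] <= 0:
                del pending[name]
    return order

def strict_priority(streams, weights, chunk=1):
    """Serve higher-weight streams to completion first (priority-aware)."""
    order = []
    for name in sorted(streams, key=lambda n: -weights[n]):
        order += [name] * streams[name]
    return order

# Two streams: a small render-blocking stylesheet and a larger image.
streams = {"render-blocking.css": 2, "hero.jpg": 3}
weights = {"render-blocking.css": 256, "hero.jpg": 32}

rr = round_robin(streams)
sp = strict_priority(streams, weights)
```

Under round-robin the stylesheet finishes only after being interleaved with image chunks, whereas the priority-aware scheduler completes it first, which is exactly the kind of resource-ordering difference that moves visual load time.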