Connecting Context to Action

Deep links solve the problem of linking into apps.  If you know exactly where you want to go, deep links can take you there.

However, they don't help us solve the challenge of finding the right place on our phones to do what we want.  There isn't a straightforward way to connect the functionality of our amazing apps together in a coherent way.  And there is still no good way to use deep links to discover content in apps when it's relevant to us.

These are the challenges we’ve been consumed with solving at URX over the last two years, and I’m thrilled to announce the next step in our journey.

Introducing AppViews  

AppViews power smart connections between apps by recommending relevant actions for a user.  These actions could be anything from finding a room in Airbnb, to visiting a board in Pinterest, to listening to an album in Spotify, or even booking a ride with Lyft.  AppViews use signals such as page content, location, and current activity to suggest a relevant next action for a user to take.  

In order to match user context with app actions, we had to build an understanding of the entities inside mobile apps and how they relate to each other and the physical world.  The short video below explains a bit more about our knowledge graph, our vision, and shows several examples of AppViews today.

As Joe so eloquently puts it, the goal is to understand what a user is doing and what could make them happier.

We've created an initial UI type for AppViews, but developers are also free to use our SDKs and APIs to create a completely custom interface with the images and metadata returned in our recommendations.  Click here for more examples of AppViews integrations.

Contextual app discovery

In addition to providing users with related actions to take, developers can get paid to show Promoted AppViews.  Unlike most mobile ads today, Promoted AppViews are relevant to the surrounding content and don't compromise the user experience.  In the words of our developers: "AppViews help me get paid for adding features to my app."

Marketers love the opportunity to reach users with high intent to use their app.  We’re working with top mobile advertisers like Spotify, SeatGeek, and Lyft to drive high-intent user acquisition and re-engagement by tapping into context across the ecosystem.  Early results show engagement rates 4x higher than traditional mobile ads.

Today, AppViews are live on over 250 mobile sites and apps that reach over 100 million monthly active users.  We want to thank all of our partners from Fitocracy, Nexercise, Tripomatic, Slidejoy, Bandsintown, Lockfeed and many others who have helped us along the way.

Developers can create AppViews today by logging into dashboard.urx.com, and marketers can sign up and we'll be in touch shortly.  Please join us as we continue our mission of bringing relevance to mobile devices and reconnecting apps to the web. 

John

The Science of Crawl (Part 3): Prioritization

Here at URX, we've built the world's first deep link search API. Developers who integrate our API can monetize their apps or websites with deep links to contextually relevant actions in other apps. To ensure the content we surface on publisher sites and applications is relevant, we've built a search engine on top of a large web corpus. Publisher pages are used as search queries to discover the most relevant third party documents to surface from within our corpus. This corpus of web documents has been meticulously maintained and carefully grown by crawling internet content. We've come to discover that building a functional crawler can be done relatively cheaply, but building a robust crawler requires overcoming a few technical challenges.

This is Part 3 in our series "The Science of Crawl" addressing the technical challenges around growing and maintaining a web corpus. In Part 1, we introduced a funnel for deduplicating web documents within a search index, considering the dual problems of exact-duplicate and near-duplicate web document identification. By chaining together several methods of increasing specificity, we identified a system that provides sufficient precision and recall with minimal computational trade-offs. In Part 2, we visited the problem of balancing resource allocation between crawling new pages and revisiting stale ones.

In this post, we look at the challenge of prioritizing which web documents to capture first. To ensure our search engine contains relevant results for our publishers, we need a crawler which continuously discovers new content. To provide new content to the search engine, a persistent web crawler extracts previously unseen links within pages and adds them to a priority queue. Links are then popped from the queue and traversed; the resulting content is downloaded and indexed into the search engine.

Naively, links in the queue could be ordered temporally, based on the classic First-In-First-Out (FIFO) paradigm. However, given the scale at which URX crawls web content, there are hundreds of millions of uncrawled links queued at any given time. At such volume, important documents are likely to be distributed arbitrarily across the queue, uncorrelated with temporal ordering. There is a need to assess the potential value of a link even before pushing it onto the queue.
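To make the contrast concrete, here is a minimal sketch (with purely illustrative URLs and scores) of a crawl frontier built on Python's heapq, where the highest-scored link is popped first regardless of insertion order:

```python
import heapq

# Hypothetical (url, score) pairs; the scores are illustrative only.
frontier = []
for url, score in [("http://a.example/1", 0.2),
                   ("http://b.example/2", 0.9),
                   ("http://c.example/3", 0.5)]:
    # heapq is a min-heap, so push negated scores to pop the highest first.
    heapq.heappush(frontier, (-score, url))

crawl_order = [heapq.heappop(frontier)[1] for _ in range(len(frontier))]
print(crawl_order)  # highest-priority link first, not FIFO order
```

Under FIFO the crawl order would simply follow insertion order; with a scored priority queue, an important late-arriving link jumps the line.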

Research within the field of prioritization can be broken into two areas: query relevance and query-independent importance. Query relevance refers to biasing a crawler to download content "most relevant" to the content actively searched. Query independent importance refers to leveraging the graph structure of web links to encourage an "importance" based prioritization. There are tradeoffs to both. For query relevance methods, it is difficult to evaluate the correlation between unseen links and search intents. The only information available is often the link itself and surrounding anchor text. On the other hand, query-independent methods typically represent the web as a graph structure, where vertices are web pages and edges are links. This formulation yields a natural interpretation of importance as connectivity within the graph. However, crawling new pages using graph-based importance may not necessarily correlate with improvement in overall user experience. Ultimately, a solution which combines and balances both methodologies may yield higher accuracy.

Query-independent importance

The first facet of prioritization estimates page importance independent of user search queries. By representing the web as a large graph, classic methods can be used to discover important nodes. Larry Page's famous PageRank is perhaps the most popular method for estimating the importance of pages. The method seeks to solve the iterative equation (taken from Wikipedia)

$$ PR(u) = \sum_{v\in B_u}{\frac{PR(v)}{L(v)}} $$

where $PR(u)$ is the relative rank of a page $u$, $B_u$ is the set of all pages pointing to $u$, and $L(v)$ is the number of out-links in page $v$.

At a high level, a page is important if: 1) many pages point to it, and 2) the incoming pages are themselves important. The PageRank problem is well studied. It was discovered that solutions of this iterative system can be degenerate, often converging onto the highest-ranked pages. To fix this issue, Page and Brin added a small connection weight between all pairs of pages to create a more stable system. This is realized by adding a damping factor $d$, normalized by the number of pages $N$, to the above equation:

$$ PR(u) = \frac{1-d}{N} + d\sum_{v\in B_u}{\frac{PR(v)}{L(v)}} $$

Below is a simple PageRank implementation, taken from one of Apache Spark's examples, with a damping factor of 0.85:

# Taken from https://github.com/apache/spark/blob/master/examples/src/main/python/pagerank.py
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

"""
This is an example implementation of PageRank. For more conventional use,
Please refer to PageRank implementation provided by graphx
"""

import re
import sys
from operator import add

from pyspark import SparkContext


def computeContribs(urls, rank):
    """Calculates URL contributions to the rank of other URLs."""
    num_urls = len(urls)
    for url in urls:
        yield (url, rank / num_urls)


def parseNeighbors(urls):
    """Parses a urls pair string into urls pair."""
    parts = re.split(r'\s+', urls)
    return parts[0], parts[1]


if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: pagerank <file> <iterations>", file=sys.stderr)
        exit(-1)

    print("""WARN: This is a naive implementation of PageRank and is
          given as an example! Please refer to PageRank implementation provided by graphx""",
          file=sys.stderr)

    # Initialize the spark context.
    sc = SparkContext(appName="PythonPageRank")

    # Loads in input file. It should be in format of:
    #     URL         neighbor URL
    #     URL         neighbor URL
    #     URL         neighbor URL
    #     ...
    lines = sc.textFile(sys.argv[1], 1)

    # Loads all URLs from input file and initializes their neighbors.
    links = lines.map(lambda urls: parseNeighbors(urls)).distinct().groupByKey().cache()

    # Initializes the rank of each URL to one.
    ranks = links.map(lambda url_neighbors: (url_neighbors[0], 1.0))

    # Calculates and updates URL ranks iteratively using the PageRank algorithm.
    for iteration in range(int(sys.argv[2])):
        # Calculates URL contributions to the rank of other URLs.
        contribs = links.join(ranks).flatMap(
            lambda url_urls_rank: computeContribs(url_urls_rank[1][0], url_urls_rank[1][1]))

        # Re-calculates URL ranks based on neighbor contributions.
        ranks = contribs.reduceByKey(add).mapValues(lambda rank: rank * 0.85 + 0.15)

    # Collects all URL ranks and dumps them to console.
    for (link, rank) in ranks.collect():
        print("%s has rank: %s." % (link, rank))

    sc.stop()

For those who enjoy Scala, a more efficient PageRank example can be found among the Apache GraphX examples.

Unfortunately, by adding a damping factor to every connection, the PageRank adjacency matrix becomes prohibitively dense and large; it quickly becomes impossible to store in memory. Many have studied ways to improve the PageRank calculation. One popular approach is Adaptive On-line Page Importance Computation (Adaptive OPIC), which modifies the above system of equations to avoid holding the full PageRank adjacency matrix in memory. Adaptive OPIC has been shown to approximate the PageRank score after a sufficient number of iterations. Ricardo Baeza-Yates et al. provide a great comparison of PageRank with OPIC (both the adaptive and non-adaptive variants) and simple back-link count.
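As a rough illustration of the cash-flow idea behind OPIC (the tiny graph, round-robin visit order, and bookkeeping below are simplified assumptions, not the paper's exact algorithm):

```python
# Minimal OPIC-style sketch: each page holds "cash"; visiting a page banks
# its cash into a history counter and distributes it evenly to its
# out-links. Importance is then estimated from accumulated history plus
# residual cash, without ever storing a dense adjacency matrix.
graph = {"a": ["c"], "b": ["c"], "c": ["a", "b"]}

cash = {page: 1.0 / len(graph) for page in graph}
history = {page: 0.0 for page in graph}

# Visit pages round-robin for a few sweeps; a real crawler would instead
# pick the page currently holding the most cash.
for _ in range(50):
    for page in graph:
        history[page] += cash[page]
        share = cash[page] / len(graph[page])
        cash[page] = 0.0
        for out in graph[page]:
            cash[out] += share

total = sum(history.values()) + sum(cash.values())
importance = {p: (history[p] + cash[p]) / total for p in graph}
print(importance)  # "c" receives cash from both "a" and "b", so it ranks highest
```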

Query relevant prioritization

The goal of a query-based prioritization scheme is to rank unseen links in the priority queue as a function of relevance to incoming search queries. One specific implementation is diagrammed below, taken from Figure 4 of Olston's recent paper.

Impact-driven crawl selection steps

At a high level, Olston's goal is to identify "needy" queries - i.e., queries whose top K-results do not contain relevant results. The type of content needed to improve these queries is identified and used to prioritize uncrawled links likely to contain such needed content. Using this methodology, underperforming queries are continuously supplied with new content.

To achieve this, the first step is to build an index of "needy" queries; those queries that stand to benefit the most from new, diverse content. In Olston's definition, a search query is needy if the average relevancy score of its top-K results is less than a predefined threshold. Defining search relevancy is a subtle topic deserving of its own blog post; I recommend Buettcher's book on Information Retrieval for a full history and quantification of search relevancy. In short, relevancy is commonly calculated as a function of the number of returned documents, BM25 scoring, and click-through rate.

Given an index of needy queries, a function can be constructed to measure the priority of a given uncrawled page, $p$. Olston writes this function as:

$$ P(p, Q) = \sum_{q\in Q} f(q) * I(p, q) $$

Where:

  • $Q$ is the index of needy queries
  • $f(q)$ is the frequency of a given query
  • $I(p, q)$ is an indicator of whether an unseen page $p$ matches query $q$

Pages matching needy, frequently searched queries will have the largest prioritization scores. Similarly, pages matching queries that are either infrequently searched or already return rich results will be de-prioritized.
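A toy sketch of this priority function, with an assumed needy-query index and a deliberately simplified match function standing in for $I(p, q)$:

```python
# Hypothetical needy-query index mapping query -> frequency f(q).
needy_queries = {"samantha bee": 120, "news team": 40}

def matches(page_tokens, query):
    # Simplified placeholder for I(p, q): 1 if the page contains
    # every query token, else 0.
    return all(tok in page_tokens for tok in query.split())

def priority(page_tokens, needy_queries):
    # P(p, Q) = sum over needy queries q of f(q) * I(p, q)
    return sum(freq for q, freq in needy_queries.items()
               if matches(page_tokens, q))

page = ["news", "team", "samantha", "bee"]
print(priority(page, needy_queries))  # 120 + 40 = 160
```

A page matching both needy queries inherits both frequencies; a page matching neither scores zero and sinks in the queue.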

To measure $I(p, q)$, Olston describes an interesting approach which seeks to jointly maximize the expected priority across all unseen pages and queries. Unfortunately, in the worst case this full expectation maximization is NP-hard. To reduce complexity, Olston suggests w-shingling pages and queries. If the w-shingles of the query and page match in at least a fraction $\rho$ of shingles, $I(p, q)$ will be 1. Otherwise $I(p, q)$ will be 0.
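A minimal sketch of such a shingle-based indicator (the shingle width $w$ and threshold $\rho$ below are assumed values, not those from the paper):

```python
def shingles(tokens, w=2):
    """Return the set of contiguous w-token shingles of a token list."""
    return {tuple(tokens[i:i + w]) for i in range(len(tokens) - w + 1)}

def indicator(page_tokens, query_tokens, w=2, rho=0.5):
    """I(p, q): 1 if at least rho of the query's shingles appear in the page."""
    q_sh = shingles(query_tokens, w)
    p_sh = shingles(page_tokens, w)
    if not q_sh:
        return 0
    return int(len(q_sh & p_sh) / len(q_sh) >= rho)

print(indicator(["news", "team", "samantha", "bee"], ["samantha", "bee"]))  # 1
```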

There exists a catch-22: if a page has not yet been crawled, its content is unknown, yet the decision to crawl a page depends on its content. To approximate a solution, Olston represents an uncrawled "page" as the tokens generated from its URL along with any surrounding anchor text. For example, in the following article from The New York Times, we see a link:

<a href="http://thedailyshow.cc.com/news-team/samantha-bee">Samantha Bee</a>

which can be tokenized into the following words

p = [news, team, samantha, bee]

We can write a custom tokenizer that splits URLs and anchor text into "words" on non-alphanumeric characters:

def clean_get_unicode(raw):
    """Strip digits and punctuation, collapsing runs into single spaces."""
    if not raw:
        return ''
    badchrs = set(['.', ',', ';', '\'', 
                   '\\', '/', '!', '@', 
                   '#', '$', '%', '^', 
                   '&', '*', '(', ')', 
                   '~', '`', '-', '{', 
                   '}', '[', ']', ':', 
                   '<', '>', '?', '\n', 
                   '\t', '\r', '"'])
    prevWasBad = False
    prevWasSpace = False
    ret = []
    for c in raw:
        if not c.isdigit() and not c in badchrs:
            if c == ' ':
                if prevWasSpace:
                    continue
                prevWasSpace = True
            else:
                prevWasSpace = False
            prevWasBad = False
            ret.append(c)
        else:
            if prevWasBad or prevWasSpace:
                continue
            else:
                prevWasBad = True
                ret.append(' ')
                prevWasSpace = True

    return ''.join(ret)
url = 'http://thedailyshow.cc.com/news-team/samantha-bee'
atext = 'Samantha Bee'

url_words = clean_get_unicode(url.lower()).split(' ')
atext_words = clean_get_unicode(atext.lower()).split(' ')
page = list(set(url_words).union(set(atext_words)))
print(page)
['http', 'cc', 'samantha', 'team', 'bee', 'thedailyshow', 'news', 'com']

We can clean the remaining tokens further by removing common HTML words (http, www, com, etc.) and filtering the remaining words against an English dictionary. Query tokenization is performed using the same code. A mapping from a query to the set of relevant pages for that query can then be constructed; such mappings are referred to as needy query sketches.
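A hedged sketch of that cleanup step; the stop list and the tiny "dictionary" below are illustrative stand-ins, not URX's actual word lists:

```python
# Illustrative stop list of common web boilerplate tokens.
HTML_STOPWORDS = {"http", "https", "www", "com", "net", "org", "cc"}
# Toy stand-in for a full English dictionary (plus known entity names).
TOY_DICTIONARY = {"news", "team", "bee", "samantha"}

def clean_tokens(tokens):
    # Drop boilerplate tokens, then keep only dictionary words.
    return [t for t in tokens
            if t not in HTML_STOPWORDS and t in TOY_DICTIONARY]

page = ['http', 'cc', 'samantha', 'team', 'bee', 'thedailyshow', 'news', 'com']
print(clean_tokens(page))  # ['samantha', 'team', 'bee', 'news']
```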

There are many ways to construct needy query sketches. One simple methodology is to index the set of needy queries and their frequencies using a tool like Elasticsearch. Page tokens are then used as a search query, and the frequencies of the returned needy queries are summed.

Combining strategies

A final priority weighting can be determined using a linear combination of query-independent and query-dependent relevance scores. The exact weighting of the two can be determined through classic machine learning techniques for ensemble fusion. The objective function is problem-dependent, but is likely a function of overall search relevance and user click-through rates.
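A minimal sketch of the combination (the weights here are assumed constants for illustration; in practice they would be learned against relevance and click-through objectives):

```python
def final_priority(graph_score, query_score, w_graph=0.4, w_query=0.6):
    """Linear blend of query-independent (e.g. PageRank/OPIC) and
    query-dependent scores; weights are illustrative, not learned."""
    return w_graph * graph_score + w_query * query_score

print(final_priority(0.8, 0.5))  # 0.4*0.8 + 0.6*0.5 ≈ 0.62
```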

Questions or comments about the content? Join the discussion on Hacker News or email us at research@urx.com


URX is hiring! View our job openings to learn more and apply to join our team.

Adding to Our Core Values

URX's team has doubled in the last year, and it's been exciting to see our core values grow and evolve with each new member of our team. As a company, we have always been dedicated to creating and codifying values that are consistent across URX, and we put these values at the front and center of everything we do. With our growth, we’ve added new guiding principles that reflect our company’s values:

Ask Hard Questions  - We believe that it’s important to ask the questions that are hard in all dimensions.  This means challenging ourselves to solve not only hard technical problems but also social and ethical ones.  We welcome and encourage open conversation, and constantly seek new ways to challenge our own assumptions.

Think Holistically - URX thinks holistically about all of the things we create.  We believe it's important to know that nothing we do exists in a vacuum, and that every action we take has consequences beyond our short term goals.  We apply this thinking not only to our products, our users, and our customers, but also to our place in the community outside our front door.

Our other values are:

[We]RX - URX is a community of peers. Titles are formalities: all URXers are treated with equal responsibility and accountability.

Startup Hard - Doing things that you wouldn’t ordinarily do unless you worked at a very early stage company; often involves unusual feats.

Lifelong Learners and Educators - URX is a culture of learning. We are greater than the sum of our parts because we share a common love of knowledge and the motivation to build something significant.

Leave your Ego at the Door - Detach ego from the product of your work and let data make decisions. Don’t be afraid to fail fast and start over.


URX is hiring! If you’re interested in joining our team, view our job openings to learn more.


DeeplinkSF Explores The Brave New World Of Inter-Connected Apps

Today’s fastest-growing companies are expanding their footprint in the mobile economy by embedding their services into other apps: Take Uber’s “Book a Ride” button in Google Maps, OpenTable’s “Make a Reservation” button in Yelp, or Instacart’s “Buy Ingredients” button in Yummly.

Leading consumer apps aren't just building individual destinations for users -- they're building apps as services and empowering third-party developers to integrate those services through API affiliate programs and deep links.

What emerges is a vastly more connected app market, where users “flow seamlessly from need to need, serviced in each state by a particular application without having to pull back, choose a new app, and then dive back in,” says John Battelle, an author and founder of Federated Media.

Next month, Battelle will join a bevy of app builders and digital executives at DeeplinkSF, an invite-only event in San Francisco hosted by URX.

Sessions will explore unanswered questions about the future of inter-connected apps. Will a single interface emerge where we consume all our apps? Will we need to download an app to utilize its services? How will the economics of app partnerships evolve?

The event features a panel discussion on the future of mobile discovery, with John Battelle, Rich Miner, the co-founder of Android, and Aparna Chennapragada, the Director of Product at Google Now.

Rich Wong of Accel Partners will moderate a panel discussion themed around scaling via app partnerships with growth leads from Uber, Yelp, Yummly and Spotify.

In addition, the event will feature live demos from Branch Metrics, URX and Workflow.

Among the topics we plan to explore:

  • Leveraging APIs to elegantly integrate mobile services into a common interface
  • The economics behind today’s app partnerships
  • Cross-app conversion tracking
  • …and much more

Join attendees from Pinterest, Nike, Apple, Hotel Tonight, Lyft, Airbnb, Facebook, Twitter, OpenTable and more.

Request an invite on the conference page at deeplinksf.com.

The URX Debrief - March 19

Mobile has a fragmentation problem

John Milinovich of URX discusses the potential of deep linking and what it could mean for Google and the rest of the local technology industry. —Street Fight

Bitly announces deep link platform for marketers

Marketers provide their link locations to Bitly and it handles the link re-direction, sending users to either the in-app location, app store, or desktop page.—Bitly Blog

An open Google Now is about to make Android super smart

At SXSW, Google announced that Google Now will eventually open its API to all app developers. Google Now could become the Android dashboard that replaces your first wall of apps by creating a timely, streamlined digest of the information and services contained within them.—Wired

Ushering in the “Age of Context” in Mobile

In the evolution from the current Era of Mobile to the future Age of Context, the supercomputers in our pocket evolve from information delivery and application interaction layers to context-aware notification and action drivers. —Wired Insights

Connecting wearable devices to the Physical Web

URX lead engineer Jeremy Lucas spoke at Wearables TechCon last week. Find out what UriBeacon and the Physical Web have to do with deep links on our blog.—URX Blog

Connecting Wearable Devices to the Physical Web

Last week at Wearables Tech Con, URX lead engineer Jeremy Lucas spoke on “Connecting Our Devices to the Physical Web”. 

 

The Physical Web is an effort initiated by developers at Google to extend the superpower of the web so that you would be able to walk up to any "smart" physical object (e.g. a vending machine, a poster, a toy, a bus stop, a rental car) and interact with it. Smart objects would be enabled to broadcast a specific URI that users can detect with their device (phone, tablet, wearable, etc.).

Google developers define an open standard, called UriBeacon, which is a wireless advertisement format for broadcasting URIs to any nearby smart device over Bluetooth.

An interesting use for the UriBeacon is to broadcast semantic information so that it can be accessed from any device. Semantic information on the pages of a website can be associated with UriBeacon links for mobile apps and deep links. A restaurant, for example, could broadcast a Bluetooth packet that lets people swipe to view their hours, check out reviews, or book a reservation.

To demo this, Jeremy spoofed a UriBeacon for the restaurant The Front Porch from his laptop. When his Moto 360 detected the beacon, it showed a screen where Jeremy could swipe between options for taking action, like reading reviews for the restaurant and making a reservation.

 

Jeremy also built an app for the Moto 360 that opens an app deep link from a voice command, via a query to the URX API. Watch a demo video of that app here.

Mobile Has a Fragmentation Problem — Here’s the Technology That Could Fix It

This interview originally appeared in Street Fight

More than a decade ago, Google's search engine solved one of the most frustrating characteristics of the web: its fragmentation. Now, the mobile industry faces an even more striking crisis as mobile users spend more and more time in often hermetically sealed applications.

The problem has shaped the trajectory of the industry. A deluge of vertical-specific applications now aims not only to help you find and discover content but also to evaluate and buy goods and services. And without a central search engine to route traffic, application developers are spending billions of dollars to acquire new customers, spawning a multi-billion-dollar app download industry that now accounts for a large swath of the revenues of Facebook, Twitter and others.

Steven Jacobs, deputy editor of Street Fight, caught up with URX's John Milinovich to talk about the potential of deep linking and what it could mean for Google and the rest of the local technology industry.

SJ: Tell me a little bit about the threat which the fragmentation within the mobile ecosystem poses to some of the larger Internet companies today?
JM: When you look at the biggest difference between desktop and mobile usage, the biggest thing is that you do not have to install a website to view its content. For Google, its biggest existential threat has always been vertical search. You go to Amazon to find that pair of red shoes instead of going to Google to end up on Amazon. All of these vertical search engines — Yelp included — are oftentimes much better at fulfilling that user intent, but those are also the highest-value queries for Google.

Fast forward to today: the fact that you have to install an app and then go into the app to figure out what's inside it means this is an inherently distributed, vertical-search world that we live in. You're not going to Google to search for content in apps. The big challenge for Google, whose dominance relies on being that central place where people go to find information, is figuring out what to do with mobile traffic.

Is deep linking simply meant to replicate the structure of the web? Is this a correcting of a problem in the mobile environment?
That's part of it. But I think deep linking goes deeper than just replicating the web. There are so many things that mobile devices, and applications, can do that the web just cannot. There are so many more signals at a developer's disposal when they're building an app, and so many things that are sensitive to a user's context that an app can do and a website could never match. A lot of the opportunity is to utilize and exploit the things that make mobile unique as a form factor to make these user experiences better.

I think location is the most important thing that mobile is sensitive to that the desktop and the traditional web world are not. There's such a sensitivity between me and my device, and my device and my location. The ability to take that location as a signal and understand what context actually means, as another layer of understanding of what I'm actually looking for, was not possible before.

In the past few years, we've seen a handful of vertical-specific mobile applications build experiences that pair discovery with fulfillment. Do you see that as something endemic to mobile or a momentary response to a broken mobile ecosystem?
The reason people yell from mountaintops about the need to build companies around different parts of the purchase funnel is a reaction to the fact that it's a broken experience and still a nascent space. If you're able to understand that a user is in search mode based on the types of content they are viewing or the words on the page, you can imagine how you can connect top-of-funnel research mode to down-funnel fulfillment. There are all of these interesting signals that exist implicitly, based on the apps that are installed on your phone.

If you can begin to surface not only the apps themselves, but what's inside those apps, against what a user is doing at any point in time — that's a huge opportunity. That way you can compress the funnel in a way that people have been trying to do for years, but that has not been possible, especially on mobile.

URX and Google both index information, but URX does not expose that information to the consumer through search. Talk a bit about the strategy here.
The opportunity for us is to become that connective fabric that ties apps together. That means both the organic ability to link app A to app B, and the ability for app A to pay app B for a conversion. The apps that succeed today are the ones that do one very specific thing. You cannot do more than one thing if you expect to grow to hundreds of millions of users. But there are all of these adjacent actions that, because of the paradigm in which apps are built today, cannot be housed in the same experience.

Imagine deep linking succeeds. Is there something unique about mobile where a single company, say Google, would not dominate search on mobile to the same extent as on the web?
That question is much more in the hands of the operating systems than in the hands of developers. As a user, I would never open one app in order to search for things inside another app unless that thing was ten times better than Google. In my opinion, building a search engine ten times better than Google is a losing prospect when it comes to consumer-facing search.

It's something that's much more up to Apple, in terms of what they are going to do with Spotlight search, or to Android, as Google continues to push forward on app indexing. But until then, as a user I will still have 45 apps on my device and I'll still go directly to them to find something in order to do what I want to do.

Let’s talk about Google for a moment.
One of the things that Google has focused on is how to take their moneymaker, AdWords, and allow it to utilize deep linking. That's a huge opportunity for Google, but it's also something that will be a multi-year endeavor. They need to make sure it's ready to roll out internationally, adopted by their sales team, et cetera.

There are a lot of things that Google is doing to take its current properties — mainly Google Now and Google Search — and drive users back into apps. But these days, they're much more focused on how to make it work on their own properties before figuring out how to make it work for third-party developers as well.

Deepscape 3.0

We took a fresh look at the industry landscape (or “Deepscape”) focusing on companies using deep links to create new, innovative mobile experiences.

We structured this version with the goal of clarifying deep linking developments in five specific areas:

  1. Developer Tools - adding deep links to apps and exposing them publicly
  2. App Partnership Tools - helping companies connect their apps with deep links and create monetization and acquisition opportunities
  3. Marketer Tools - using deep links for user acquisition, re-engagement and app monetization
  4. Mobile Apps with APIs - companies that have built their own APIs for direct integrations
  5. Consumer Search - companies building new consumer search and discovery experiences

Details on companies are below. We would love your feedback!

Developer Tools

Open Source Frameworks

Frameworks that will help you set up deep links in your iOS and Android apps.

HTML Markup for Deep Links

HTML web code that allows you to more fully leverage deep links and make your app content discoverable.

App Partnership Tools

  • URX - works with apps to build partnerships at scale with an API for linking into the most relevant app based on context of the user and relevance
  • Button - connects specific apps together with loyalty and commerce
  • Facebook App Links - static deep links into other App Link partners
  • Direct partnerships - apps that have built APIs for direct integrations (e.g. Spotify, Uber, OpenTable, etc.)

Marketer Tools

Metric and engagement tools that help marketers leverage deep links in their social, email, organic and paid campaigns.

Consumer Search

  • Apple Spotlight - search for content on iOS devices
  • Bing - search that layers on rewards and user experiences
  • Google Now - search that lets you preview app content before downloading
  • Quixey - find new apps and search existing apps on your device
  • Relcy - “page-rank” style mobile app search
  • Vurb - new consumer search app
  • Wildcard - preview app content before downloading

Happy deep linking!


The URX Debrief - March 9

Deepscape 3.0 - A new look at the industry landscape

We took a fresh look at the industry landscape of companies using deep links to create new, innovative mobile experiences. We structured the newest version of the "Deepscape" with the goal of clarifying deep linking developments in these areas: developer tools, marketer tools, app partnership tools, mobile apps with APIs, and consumer search. —URX Blog

Branch Metrics raises $15M to scale deep linking technology

Branch Metrics links bring users to deep-linked content after the install. The shortened links can also be fully tracked, allowing customers to measure clicks, installs and other down-funnel actions, including organic growth referral programs, invite link clicks, sharing features, and more. —TechCrunch

Vurb launches app to reinvent search on mobile

Our solution revolves around the idea of the Vurb Card. It’s a portable, interchangeable, and intelligent medium that connects you to information and relevant apps and services. By bringing together Cards across multiple categories (e.g., places, movies, music), we’re helping you get things done – all from a single place. —Vurb Blog

Quixey raises $60M for mobile app search

Quixey confirmed that it raised $60 million in a strategic Series C1 funding round led by Chinese e-commerce group Alibaba. Quixey develops technology to connect people with new applications and helps them discover the content within mobile apps. —TechCrunch

Google will boost relevant app content in mobile search results

Starting today, we will begin to use information from indexed apps as a factor in ranking for signed-in users who have the app installed. As a result, we may now surface content from indexed apps more prominently in search. —Google Blog

What is the natural exit point for apps?

Once we accept that our users will eventually leave the app anyway, and there may be upside in helping them through this process (both monetary and otherwise), the next question that would occur to a logical developer is “where should I put these outbound links?” Specifically, what’s the natural exit point for an app? —Wired Insights

Travel companies are building smarter app integrations with deep links

Although travel transactions take only a few taps, actually getting consumers into an app to book remains companies’ greatest challenge. Deep linking solves that by creating efficient channels from demand to purchase through industry partnerships. —Skift


The Future of App Partnerships

Great app partnerships create great user experiences as well as new business opportunities for mobile developers.  We’ve designed our API to allow developers to natively integrate connections to other apps into their own app.

Visit our new URX Labs page to see some of our ideas on how context can be used to suggest relevant and useful actions to users.  Two of our awesome partners, Fitocracy and Happinin, are live today with great examples that bring new meaning to the term “native”.

Search Widget

Fitocracy added a button in their workout app that opens a music search page where users can search for songs, artists, or playlists and are then taken directly into the music app to listen.

Action Sheet

Happinin added a music button that appears next to each band highlighted in their app.  On tap, the user chooses which app they want to listen in, and the URX API links them there.

Check out our Labs page for more inspiration including ideas using push notifications, multiple actions on a page, and creating cards from results.