When Will Request Platforms Evolve to Recommendation Engines?

The visual media industry is good at content aggregation, yet has historically struggled with customer retention. Once a service focused intently on solutions, it was broadsided by online sharing and a new class of creators reflecting the sharing economy. A rapid focus on rights protection drew attention away from marketing development, and a game of catch-up has been in play ever since. What has been clear throughout, however, is that the core use cases supporting the industry haven’t really changed.

Along the spectrum of content, the simplest need is for the readily available and cheap: quick use for a corporate blog, or short-duration non-campaign use, can easily be sourced from volume stock platforms or begged from anonymous sources at little to no cost (other than time). The actors involved reflect this use: typically a single marketer.

At the other end, where complexity factors into everything, the content acquisition math becomes tougher to solve. The campaigns are larger, farther-reaching, and longer. There’s significant budget, many actors to appease, and in general much at stake. With money come options, and more options mean more decisions, more inventory, more potential sources.

While supply has struggled to keep pace with demand, there is no lack of products on the market to help solve content strategies. In fact, many companies are turning to technologies such as computer vision to build enhanced services upon their existing business models (Shutterstock’s reverse image search being one). A few, like ImageBrief, Foap, Snapwire and now 500px, have implemented request platforms, but with so few imitators among licensing incumbents it’s questionable how valuable a service this is.

Request platforms invite brand owners to publish their need (loosely in the form of a creative brief) to their community, reap engagement, and harvest content for consideration. The brand owners dictate the usage rights and the price paid to the winning community member (or in some cases a prize, as the process is essentially a contest). As host, the request platform receives all the content submitted as new inventory to market elsewhere – in itself a content acquisition strategy that can help fill much-needed gaps.

As business models, request platforms have been around for a long time – since before cloud tech enabled offsite hosting, when digital distribution still faced significant hurdles. Even with the strides made in big data and computer vision, why hasn’t the model improved beyond what seems to be simple search and retrieval?

More information exists online than ever about how content is being consumed and by whom – metrics that can help define the success and potential optimization of campaigns. Beyond marketing analytics, actual rights information and other risk mitigation data can be obtained, widening utility beyond campaign efficacy and into digital rights management. As a service this has clear value for campaign and brand owners, but how else can request platforms – or any visual media licensor – move the needle on their product and start to anticipate the needs of their customers?

Auto-parse creative briefs: Online forms are a hassle, and an impediment to someone using your site or service. Why not allow briefs to be uploaded in their original form, parsed, and a set of applicable results generated automatically from inventory? Briefs can contain a high degree of conceptual information, which has a good chance of being filtered out through a normal search. By applying the largest applicable data set to a single query, you improve the odds of relevant results.
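As a sketch of what brief parsing might look like – assuming an uploaded brief has already been extracted to plain text, and using invented image IDs and tags – a naive keyword-overlap matcher:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "for", "to", "in", "with", "on"}

def extract_terms(brief_text):
    """Tokenize a plain-text creative brief and count candidate concept terms."""
    words = re.findall(r"[a-z]+", brief_text.lower())
    return Counter(w for w in words if w not in STOPWORDS and len(w) > 3)

def rank_inventory(brief_text, inventory):
    """Rank image IDs by overlap between brief terms and each image's tags."""
    terms = extract_terms(brief_text)
    scores = {
        image_id: sum(terms[t] for t in tags if t in terms)
        for image_id, tags in inventory.items()
    }
    # Drop zero-score images; order the rest by descending overlap
    return sorted((i for i in scores if scores[i] > 0),
                  key=scores.get, reverse=True)

# Hypothetical brief and inventory
brief = "Campaign seeks authentic urban lifestyle imagery: cyclists commuting at dawn."
inventory = {
    "img-001": {"urban", "cyclists", "dawn"},
    "img-002": {"beach", "sunset"},
}
print(rank_inventory(brief, inventory))  # → ['img-001']
```

A production system would of course need real concept extraction rather than bag-of-words counting, but even this toy version surfaces conceptual terms ("authentic", "commuting") that a literal search form would never capture.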

Expand the inventory base: While perhaps antithetical to most licensors, who are primarily concerned with aggregating inventory on their own platform, providing clients with options where platforms fall short addresses retention and service. Why not open the scope wider, include competitors and partners, and develop a product that delivers option value? This is the age of referral traffic and micropayment processing – broad collaboration, channel agents and reselling are not foreign concepts to this industry.

Avoid myopic metrics: User data derived from one’s own platform is insightful, but it’s still a single view of characteristics and trends that, at best, reinforces successful tactics. Why not apply a multi-dimensional view and incorporate broader data sets? Invest in historical and forecast data that are macro influencers, and start to map future-state requirements from the community. Basing future behavior on the past without including environmental factors misses the narrative thread.

Provide follow-up tracking and analytics: Post-transaction, customers have little to no interaction with the content source, cementing the ‘transactional’ nature of the relationship (and underscoring core retention issues). Give them more reasons to engage, like options around campaign reach (through computer vision tech) and bundled aggregate metrics for comparative views. Sharing knowledge is a first step toward recommending future actions.

Very few companies are looking at full-lifecycle content management that encompasses acquisition through analytics (Adobe is the most bullish, through its recent acquisitions), which is where the bulk of previous licensing business has migrated. Request platforms that seek to solve UGC harvesting target only part of the problem faced by content marketers. Viewed as true consumers, content marketers are in the position of formulating the questions around what they need next; for platforms that seek to fulfill those needs, endeavoring to answer the questions before they’re asked is the difference between their current model and a recommendation engine.



Online Publishing Growth

(originally posted on IMGembed’s blog 2014)

Online publishing has become so commonplace, and its tools so ubiquitous and available, that it’s easy to forget how recent this explosion was, how broadly it has reached, and how it continues to grow.

Blog use is on the rise, as both Tumblr and WordPress statistics confirm. Within the past 3 years, Tumblr has seen its number of blogs increase more than tenfold, from 17.5 million to over 181 million. The cumulative number of posts increased almost 17-fold within the same period, from 5 billion to 82.9 billion. WordPress has seen its page views more than double within the same timeframe, averaging over 14 billion page views per month in 2014.

Driving this growth is content creation in both business and non-business capacities. As a general trend, creating online content is now the cultural norm for Gen C (“generation connected”), with over 90% creating online content at least once per month. This engagement spans generational boundaries (35% of Gen C is over the age of 35) and nationalities. In the US alone, over 78% of Gen C engage in curating content online, and over half of global users do the same. The process of selection, sharing and use is both habitual and commonplace.

Trends in business look equally compelling: one-third of US businesses planned to increase image use in their marketing in 2013, and over half planned to increase their content marketing output that is inclusive of imagery (blog posts, articles, social media). Clearly, the adoption of content marketing by businesses small and large has taken firm root, and image use within content marketing contexts will expand. Image consumption by businesses will continue to grow across industry verticals and reach deep into various levels of the product, sales and marketing cycle, driving essentially ubiquitous integration of use.

Images sell ideas in ways that text cannot. Whether for a commercial context, or editorial, imagery has been proven to increase engagement. As online publishing grows, so shall opportunities for image use.

Where is the new image inventory going to come from to serve the marketplace? From those who take an active part of creating and sharing content online. Business models that allow for participation by both creators and publishers, connect them, and focus on a seamless transaction with significant value add in the process will benefit in the new content economy.

Content Marketing as a Photo Channel

(originally posted on IMGembed’s blog 2014)

As content marketing pushes forward in 2014 through both native ad growth and non-native content channels, marketers will be required to understand the impact of their efforts. A key component within the suite of business intelligence metrics is use tracking. Native ads can deliver metrics back to marketers directly from their partner site, but what about non-native channels, specifically sharing and re-posting?

Photography is a massive greenfield for brand marketers, who value brand participation and sharing above all else. The world’s leading brands seek to engage consumers through photography – no other medium has the same immediacy of message within a foreshortened cycle of consumption, or can offer the type of intimacy advertisers seek to deliver. More channels are available to deliver photography than ever before, and no other platform or medium is as engaging.

How can photographers tap into these channels? By making available their works through platforms that promote the sharing and tracking of images – engage potential marketers directly, and gain exposure through use and dissemination. Traditional sources for imagery – like commercial and editorial licensors – can only penetrate these markets so far, as content marketers seek new sources that offer the flexibility and uniqueness that the web offers.

Native Ad Growth in 2014

(originally posted on IMGembed’s blog 2014)

Content marketing continued its expansion in 2013, disrupting traditional online advertising. The elements aligned perfectly: consumers became less engaged with banner ads, which increasingly seemed an arcane vestige of newspapers and old technology; likewise, the decline of traditional publishing volumes impacted the supply of new content – at least the ability to increase supply to meet the demand of the digital age.

As businesses quickly adapted and channeled resources into generating content, traditional press took notice. Buzzfeed and Huffington Post had already been highly successful in adopting a native ad strategy. Crafted in a targeted way, native ads deliver seamless integration into existing editorial parameters, have a higher CTR, and are far more measurable in ROI than traditional ads. Nearly 75% of publishers now offer native ad integration, and even the NY Times is finally caving in.

Of course, with success comes scrutiny. Lack of transparency for consumers is drawing discussion of industry regulation, but that shouldn’t slow things down too much. Blurring the lines between paid content on news media, and news media on big business (as supported by firms like NewsCred), is a by-product of both advertisers and old media adapting to new technology and behaviors.

With the growth of content creation, opportunities for photographers will expand. Image consumption will continue to decentralize from major publishers and agencies out to businesses of every stripe and color – anyone actively looking to publish online. 2014 is shaping up to be a good year for content, and no story or communication is complete without photography.

A Case of Online Attribution

(originally posted on IMGembed’s blog 2014)

The recent decision in the Morel case, in which editorial photographer Daniel Morel brought copyright infringement and DMCA violation claims against AFP and Getty Images, was much more about the systematic failure of well-worn business practices within photography than about willful infringement by the defendants. While AFP and Getty took the stand, and were certainly culpable, it was more interesting to note who was not there: the person who took the images from TwitPic and republished them under their own name, and TwitPic itself.

AFP came across the images – all of the Haitian earthquake in 2010 – on TwitPic, and used them while crediting the wrong photographer. Getty picked them up from AFP’s feed and made them available to news publications; both licensed them directly to publishers, with the wrong attribution. Daniel Morel never received payment, and tried to get his images removed from distribution as soon as he was made aware.

By then, the distribution system had gone through its motions: AFP assumed the images were attached to the right creator, and Getty assumed anything AFP gave them was legally “clean” – that all rights had been cleared by AFP. The publications that license from Getty, in turn, made those same assumptions: that through the whole chain of operations sufficient due diligence had been done and rights had been secured by all parties.

The fly in the ointment (or wild card), however, was TwitPic and its agnosticism toward content and attribution. Despite their terms of service being clearly violated, there was nothing preventing the infringing behaviors on TwitPic’s platform. As a distribution model, it’s great for photographers to quickly gain access to a wide audience on breaking news, but does little to nothing around securing attribution for broader re-publication.

Attribution is a core value of Imgembed’s platform, allowing content creators to hold that association through online use. In a hypothetical use case, a photographer like Morel could use Imgembed as one of his photo channels and realize the immediate benefits of embedded attribution, online use and monetization of his imagery, and tracking of its use across the web.

There’s a wide disparity between traditional media conduits, with their aggregation and distribution practices, and how photographers are capturing and sharing their imagery – the expectations on both sides, as the Morel case has revealed, reflect how wide this gap is. As a system, Imgembed is an end-to-end solution that brings together the parties that mutually benefit from online use, because photographers and publishers should meet somewhere other than court.

The Content Marketing Machine and Photo Supply

The sharp rise in content marketing by corporations has demanded a commensurate rise in content acquisition from traditional sources, and has opened the door to niche platforms and services such as Thismoment and Newscred. The impact on the photo industry has been mixed. On one hand, image licensing agencies – specifically large-scale aggregators like Shutterstock – have benefitted from their inroads into corporate clients, expanding and servicing a segment that is marketing-focused with high-volume transactions. On the other hand, UGC (user-generated content) and platforms (like Chute) that harvest UGC have challenged traditional sourcing.

Who better to give an overview of these recent trends than someone at a content firm? Kristine Stebbins, VP Strategic Services at Filter – a company that focuses specifically on digital content for a range of clients including Microsoft, Nike, and Google – provided some insight.

On the growth within the content management industry…

[There has been] a significant increase in content management services over the past 2-3 years, [with concentration in] B2C consumer goods, luxury goods, high-ticket items especially for ecommerce digital experiences and technology. It is difficult to quantify exact numbers, but I do [know] that all of my clients either already have a CMS or [are] considering a CMS to support their content operations model.

On in-house acquisition strategies, challenges maintaining rights…

I find that clients who do not offer content as a product really struggle with [acquisition]. Specifically, they need the content, but are chronically under-resourced to acquire, obtain or create this content – so they do attempt to grow “in house” teams, but find it difficult to sustain those teams given the ebb and flow of need. That is a way that Filter can support these teams: given our flexible resource model we can scale a team quickly to meet a content need and then wind that team [down] when the need diminishes.

We see a dramatic increase in the need for visual media – defined as pictures and videos – of all sorts. We work to acquire content and obtain and negotiate rights for content usages on behalf of our clients. There are always challenges in this space, especially given some of the challenges of connecting rights management criteria directly to the asset itself. There are some platforms/systems that can help clients track usage rights, but it remains a time-intensive process, and it is a critical issue given that legal rights need to be adhered to. We also see a ton of latent visual content in our clients’ DAM systems, oftentimes with no direct information about rights, and one of the big challenges is going to be doing the forensics necessary [in a timely manner] to track down the appropriate usage rights and information.

On the use of UGC…

A few years back UGC was the perceived panacea for clients, as they believed that the majority of content they really needed would be created by their users. We now know that is not the case. There are some areas where UGC can be very beneficial – especially in categories where “support” content is important – and clients can create a platform where customers can support each other with their questions. That being said, there is a place for UGC to be integrated into a content marketing model, but it needs to be structured, curated and resourced appropriately to ensure success. Typically, I see clients underestimate the amount of resourcing needed to ensure UGC is actually valuable for distribution.

On the role of media licensing companies, and how they can address the needs of the content marketer…

One of the biggest impediments for the photo industry – still – is being able to get large photo files to source destinations quickly from remote locations, and having access to cloud services that enable the photographer to more easily upload these images to the cloud for faster editing, art direction and decision-making on usage, preparing for download to appropriate channels. The photo editor needs to be a photo expert, storyteller and “content engineer” so that they can see the photo, understand the story it conveys, and be able to quickly tag, identify and curate the content in an appropriate manner so that it can be quickly accessed or distributed to appropriate targets or channels.

They could make it easier to buy content packages based on target audience needs – complete packages with various content components that allow for redistribution on clients’ sites. If the content could “automagically” vaporize when rights expire, whether by date or other criteria, that would be super awesome! Time is of the essence here, and a combination of capability and technology to support rapid response is necessary to stay relevant in the speed game.

Firms like Filter have long acquired content from traditional photo sources, and continue to do so on behalf of their clients, but there’s a clear disconnect between supply and demand; today’s content marketers are seeking new supply channels that are sensitive to their requirements, and that are reflective of the small-use/large volume trendlines. While UGC might not be the ‘panacea’ (rights issues, curation), its hassle-free, passive approach to end users still resonates, and doesn’t demand byzantine licensing, pricing, and distribution models. Many industry leaders are already supporting content marketing platforms (like Getty/Newscred), and quickly filling existing and new opportunities, but it’s clear that the map of digital distribution has grown far outside the visual media licensing community.



Embedding Images: A Contextual Ad Delivery Model?

Embeddable images can be a tactic for extending the reach and brand of online publishers, who are the direct recipients of the traffic-building and presence embedding provides. As you allow sharing, your content network grows, as do link-backs and the overall web imprint of a publisher’s site and branding. More often than not, though, the image owner and the publisher are not the same entity. What’s in it for the image owner?

UGC aggregators that welcome the outbound embedding of images from their platform typically expunge any metadata from images during the upload process – metadata that might contain useful information on an image’s provenance. Rightful copyright, web links, and subject and identification information can help tie the image owner into the embed network they find themselves party to. When these are absent, only the publisher benefits, marginalizing the very actor that creates the value – value for the end user, yes, but more importantly the high-growth network of embeds that the publisher now owns.
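To make the alternative concrete, a minimal sketch of an embed snippet generator that carries provenance forward rather than expunging it – the field names, markup shape, and URLs here are all hypothetical, not any platform's actual embed format:

```python
from html import escape

def build_embed(image_url, page_url, creator, license_note):
    """Emit an embed snippet that keeps attribution attached to the image:
    a link back to the source page, plus a visible credit and license line."""
    return (
        f'<figure>'
        f'<a href="{escape(page_url)}">'
        f'<img src="{escape(image_url)}" alt="Photo by {escape(creator)}"></a>'
        f'<figcaption>Photo: {escape(creator)} &middot; {escape(license_note)}'
        f'</figcaption></figure>'
    )

# Hypothetical example values
snippet = build_embed(
    "https://cdn.example.com/img/123.jpg",
    "https://example.com/photos/123",
    "Jane Doe",
    "CC BY 2.0",
)
print(snippet)
```

The point of the sketch is that attribution travels inside the snippet itself, so every downstream embed keeps the creator and license visible rather than depending on metadata the upload pipeline may have stripped.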

While building up a network of embedded images is no easy task (it takes inventory worth sharing, plus traffic), a network as a distribution channel is well worth the trouble. As a closed system, embeds offer the ability to deliver info to the third-party publisher’s site (rights permitting), namely advertising. In-image advertising has been with us for many years, and recent consolidation with Luminate’s sale to Yahoo! removed a significant provider of in-image ads from the competitive landscape (publisher access to in-image ads was turned off on October 1). As Yahoo! seeks to launch its own solution and leverage its considerable ad inventory, coupling it with Yahoo!’s truncated Flickr monetization strategy is hard to overlook.

The in-image ad industry isn’t just Luminate – GumGum is now the clear leader – but both have fostered their own competition. Vibrant Media is modeling its success on GumGum’s, and like GumGum, its core strength is ad aggregation and publisher acquisition. Popmarker and Imonomy are non-US startups seeking a toehold in the US market, but the barriers to entry aren’t only in publisher acquisition. A limited number of publishers have the rights to advertise over the images on their sites – let alone over those of their embedded third-party publishers. The real map to monetization lies in the open participation of image owners (or someone who holds those rights in aggregate, like Getty) partnered with an in-image ad solution that is, above all, relevant.

Relevancy is achieved through publisher context – not the context of the image – which is something often overlooked within the in-image ad industry. Many base their relevancy metrics on how closely an ad matches the subject of the image (again, something expunged metadata imposes barriers to), but the accuracy is fleeting. The subjects and content of an image – derived programmatically – don’t always capture the narrative or underlying concept the image is trying to convey. When relying solely on the content of the image, the context between image, ad and publisher is lost.

Some are bucking visual recognition applications altogether and going the tried-and-true route of big (meta)data to contextualize their in-image ads. Netseer has developed an in-image ad solution that matches the ad against the content of the page the image is placed in – an approach that adheres closely to publisher intent, and therefore yields greater relevancy between ad and image. The in-image ad model, as a click-through proposition, lives and dies on relevancy to the end user. Understanding end-user intent and preferences is much more a big data problem than an image tech problem; luckily, publisher sites offer a wellspring of information, and the relevancy between the image embedded in the page and the content of the page is already predetermined (and still largely human-driven).
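The page-context approach can be illustrated with a toy matcher – scoring candidate ads against the text of the hosting page rather than against the image's own subject tags. This is a deliberately crude keyword-overlap sketch (the ad inventory and page text are invented), not a representation of how any vendor's actual system works:

```python
import re

def keywords(text):
    """Crude keyword set: lowercase words of 4+ letters."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) >= 4}

def best_ad(page_text, ads):
    """Pick the ad whose copy overlaps most with the hosting page's text.

    Returns the winning ad ID, or None when nothing overlaps at all."""
    page_terms = keywords(page_text)
    scored = {ad_id: len(page_terms & keywords(copy)) for ad_id, copy in ads.items()}
    top = max(scored, key=scored.get)
    return top if scored[top] > 0 else None

# Invented example: the page is about kitchen renovation, regardless of
# what the embedded photo itself depicts
page = "Ten budget kitchen renovation ideas: cabinets, countertops, lighting."
ads = {
    "ad-travel": "Cheap flights and beach vacation deals",
    "ad-homeimp": "Save on kitchen cabinets and countertops this week",
}
print(best_ad(page, ads))  # → ad-homeimp
```

Even at this toy scale, the design choice is visible: the match is driven entirely by the publisher's page, so an image of, say, a family at a table would still receive a home-improvement ad on a renovation article – which is the relevancy argument the paragraph above makes.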

As embedded image networks proliferate, it’s incumbent upon the owners of those networks to minimize impediments to sharing while finding new ways to monetize usage. Removing any type of paywall is critical to adoption, given the type of end user typically involved; also critical is employing a solution that can incentivize continued and expanded use by publishers, while remunerating image owners and advertisers alike.


Embedding Images: The Promise of Proliferation, The Tension of Control

The practice of allowing visual media to be embedded on an external site by an anonymous user has seen widespread adoption (thanks, YouTube), to the point that any site in the business of publishing content uses it. By allowing readers to share the video or photo itself, rather than a page link, it incentivizes use and reuse by offering bespoke contextualization of the object. Moreover, it gives the source publisher the ability to generate outbound links and traffic data in a more meaningful way. With embedding by lay online publishers now commonplace, many photo startups and incumbents within photo tech and licensing have either focused exclusively on embedding (like IMGembed) or integrated it as another solution for their clients (like Yay Images’ “streaming” service and Getty’s embed program). While there’s clear demand, there’s ambiguity over how best to monetize an embed feature: per-impression fees, subscriptions, in-image advertising, data mining? Interestingly, photo tech seems much more interested in plotting new paths than its retail-focused counterparts.

Houzz, the Pinterest-inspired home-décor community, recently secured $165MM in new funding. The startup doesn’t hide that photo sharing is critical to its success – it’s right there in the global nav, immediately next to the logo: “PHOTOS”. The embedding feature on every image (over 4.2MM of them) serves a clear purpose: to link back to Houzz.com. Embedding images from Houzz is simple and requires no registration. Embeddable codes are served up with every photo; Houzz delivers the image from its server, with a link back to the page on Houzz where the photo is published. There’s no credit line on embedded images indicating their source (and, once an image is embedded on another site, you can easily save the photo, orphaning it).

Given its reliance upon photos for its business model, one might expect a bit more intelligence in how embedding is leveraged, beyond an anonymous share-out and link-back strategy. Houzz could employ a metadata strategy that better promotes both the content of the photo and its source (look to the Getty/Pinterest partnership), thus strengthening its brand and connectivity with users on the supply side as well as the consumer side. Utilizing visual recognition technology could also create a more powerful selling platform, allowing Houzz to use the photo as a distributed point of sale among its retailers and partners. Embedding, when viewed as a vehicle for more robust and deeper engagement, can take on added dimensions.

Perhaps the most interesting use of embedding technology for photos – at least for those who aggregate and license – is NewsCred’s recent launch of its Image Editor. NewsCred is in the business of creating and licensing content for content marketing purposes, and images are in most if not all of the content it licenses and distributes to other companies. Its platform tracks use and provides analytics, but its core service is as an aggregator and licensor of content. While NewsCred has long had multiple sources for visual content, its marketing for the Image Editor emphasizes access to “12.6 million stock images from Getty Images and our new partner Shutterstock as well as 28.1 million editorial images”. Clearly, putting the editor in the hands of the user is a lean toward customization of the service, but it’s also another pivot away from managing downloads on desktops. As part of a contained experience in using photos for content marketing, it benefits from use analytics that NewsCred’s suppliers, NewsCred itself, and its end users can all utilize.

If the practice of embedding is as much about control as ease of proliferation, the creation of “content on a string” by NewsCred and other content creation/management companies (as opposed to pure aggregators like Houzz and Pinterest) leans heavily on the control premise. Of course, the big differentiator lies in rights: aggregators who open themselves up to crowdsourcing have real limitations on how they can interact with that content. Pinterest drew the ire of the photo licensing industry after its launch by virtue of its inability to support accurate rights-holder information, or even proper links, and was viewed as a massive orphan-work generator (it has since made product improvements that address this). Where UGC imposes barriers for photo tech and aggregators, photo licensing companies like Getty hold the advantage of a vetted inventory. Still, the strategy is one of proliferation and not of containment – but shouldn’t it be both? Embedding can provide clear value to the user (hassle-free access, customization) and to the rights-holder/publisher (tracking, analytics, sharing). Building those channels of use, through proliferation, is of high value. How that network is exploited becomes the central question of monetization.

Oh, What A Difference (Or Not?) 5 Years Makes

Stock image consultancy firm Visual Steam recently published a summary of their 2014 survey of US art buyers in stock image licensing. It outlines some of the major trend lines from the previous year (continued pricing pressure, use migrating to online and away from print), and provides insight into buyer habits across sourcing, pricing models used most, and “top of mind” destinations for sourcing images (it’s still Getty’s game, but Shutterstock continues to nip away).

Comparing trends between this year and last might reveal only glacial movement among art buyers, who largely have not changed their habits over 12 months. What about 5 years? The publication Graphic Design USA has for many years published its own stock visual survey (itself sponsored by a commercial stock licensor). Have 5 years changed buyer habits all that much, and what do those habits reveal about trend lines and the stock industry’s response? The similarities between GD USA’s 2009 survey and Visual Steam’s 2014 survey are close enough for comparison.

Motion
The use of motion has increased greatly over 5 years, according to those polled. In 2009 the share of buyers licensing motion was 35% — today it is 73%. The number of producers and licensors of motion has not increased commensurately within the stock industry, so where is the increased demand being met? Is inventory finally being exploited across Getty, Pond5 and others, or has the increase in use been met through assignment?

General Use

41% of buyers polled in 2009 said they used stock more than in the previous year. 60% of buyers polled in 2014 said they expected to increase their use in the coming year. While this comparison is reality vs. forecast, it does point at a general volume increase year over year, which should be aligned with ad growth. However, sales volumes do not equal revenue volumes. To further illustrate the eclipse of print by digital: almost all of those surveyed in 2009 used stock for print campaigns; today it’s roughly half.

Pricing Models
To which pricing and licensing models does the money go? RF licensing still sees the lion’s share, which is little surprise. RF took over half (54%) of what was spent in 2009, rising to 59% (over RM) in 2014. What was not tracked in 2009, but is relevant today, is free use – it accounted for 13% of total licenses acquired in Visual Steam’s survey, making the rise of direct-to-photographer sourcing by buyers a powerful theme. Certainly Flickr, Creative Commons, Google Images, and outside distribution and sharing have accelerated this trend. Spending little, if anything, is still a major driver in content sourcing: only 23% said that quality trumps price every time.


Perhaps a trivial difference: while almost all sourced their imagery online in 2009, quite a few were still reliant upon print catalogs and CDs. While GD USA’s poll doesn’t give us buyer preferences around where they source, Visual Steam’s does, and Getty is still top of mind among stock licensors. Getty and iStock accounted for well over half of those asked for an immediate “go-to”, with Shutterstock not far behind. Corbis and Veer are very much considered tier 3. These findings certainly reflect market share capture. A distant, yet powerful, source was Google Images, but what remains opaque is whether this is a front door to industry licensors who benefit from tagging and ads, or a method for sourcing outside of stock licensing entirely (and what is the differential?).

Buyers seem to have grown accustomed to subscriptions and to trawling microstock sites for cheap RF in the past 5 years, since questions asked in 2009 (“have you used a micropayment site?” and “have you used a subscription service?”) now seem as antiquated as print catalogs and CDs. No doubt, with the move by iStock to go upmarket with its Vetta collection (and with Shutterstock mimicking the same in its recent Offset), buyers are challenged to break old prejudices, even if in practice it was a shell game of content by the licensors.

Will we see the same prejudices, this time around user-generated content, defeated in 5 years’ time? UGC was raised as a question in 2009, and over a third of respondents said they’d used UGC at some point in a campaign. Oddly, Visual Steam’s survey did not cover UGC. On the tip of the tongue in 2009, today it remains as fragmented and immature a market as ever, with many startups and incumbents seeking traction and market acceptance as iStock once did. What most photo tech companies who venture into monetizing UGC for the stock buying community consistently fail to grasp is that quality is still paramount (quality implying provenance, or assurance of rights), and that a simple exercise in aggregation does not account for the convoluted landscape built on the preferences and practices of a fickle market. Is 5 years really that long a time to solve the problem?

3 Major Reasons Photo Tech Needs to be Concerned About Rights

The recent explosion of startups devoted to monetizing photography has revealed a certain diversity of approach within the photo tech ecosystem, where business models largely target accelerated aggregation of imagery and monetization of either the audience (data, app charges, etc.) or the images themselves (advertising, print on demand, licensing/use). Many, like Chute, provide tools for aggregating UGC to supplement brand campaigns, while others, like 500px, focus on fine art enthusiasts and provide enhanced portfolio tools in a community setting. The variance unfurls like Instagram’s API subscribers: everything from consumer apps to B2B web solutions.

Almost all share the view that online images are an untapped resource. In-image advertisers, like Znaptag, seek to push ads through on publisher sites (a similar tagging experience the recently departed Stipple helped pioneer). The in-image ad market is heavily populated by incumbents from the ad industry, not photo tech, so often, as with other photo tech startups, less emphasis is placed on image inventory and its provenance. It’s a volume game, and when the pipes are open wide, and little regulation occurs, you can expect some trade-off around quality.

By quality, we don’t imply artistic integrity, technical attributes or commercial viability, but the rights associated with an image – the verification of source and the rights granted to an end user. There are many inferior images that reside with image licensing incumbents, just as there are many superior images being aggregated by photo tech startups. It’s how images are sourced, the process, that the industry needs to be vigilant over.

Platform does not equal inventory

The incumbents in photo licensing have the edge in inventory. Existing licensors like Shutterstock, Getty, and others have long placed barriers to entry that reduced or eliminated risk for their clients. It was a baked-in process that translated to client attraction and retention, and it is still a critical cornerstone of their ability to productize their inventory. While photo tech platforms obsess (and stakeholders watch just as obsessively) over what rights each user transfers to them, very few actively qualify each image that is submitted.

For many, it’s an impossible task. They exist within the DMCA’s safe harbor provision, and cannot actively monitor the types of images being submitted to them. With that foundation set, they’re reliant upon opt-in measures (500px, EyeEm, and now Flickr) to build inventory. While this might achieve some success, it is still a decentralized program apart from the main proposition of the platform. Few can recreate the foundation of a Shutterstock, which focuses solely on aggregation and distribution for specific audiences. The initial proposition is key; once deviated from, the noise level rises and mixed messages ensue.

Infringement claims are rising

Getty’s infringement business is big, and viewed by many pundits as “free” money. Sure, it doesn’t scale proportionately to inventory, nor does it scale nicely against admin costs, but it’s growing, and others are noticing and coming to the table. Claims aren’t only drawing solutions-minded intermediaries who promise to do the dirty work; this is also a photographer-driven incentive, and those who’ve been infringed upon demand restitution.

Adding to this trend is government attention to facilitating copyright claims, which have long been out of reach for individuals due to court allocation and claims processes. Once the doors open and the claims process for infringing use becomes easier, you can bet even more growth within the infringement industry will occur.

UGC is still perceived as the ‘unwashed masses’ by publishers…and it is

Photo tech startups view the world’s mobile captures as untapped inventory ripe for exploitation, and in many cases it is, but major publishers are still quite wary of sourcing directly from UGC-based startups due to the inherent risks.

Publishers (and advertisers) will still require confirmation of source, or at least an end-use license that provides warranties in the instance of a claim. Even the incumbents slip up now and then (Morel), but such anomalies aren’t enough to produce a mass exodus of clients. Risk aversion is still weighted heavily against startups, whose selection processes are often non-existent, and any automated or crowd-curated aspects of a platform don’t reflect the rigor expected by potential clientele.


Of course, photo tech isn’t aligned with rights at the image level. Notorious terms of service, of which Instagram’s is the most famous, were created as rights grabs. Most startups have adopted similar terms of service, as is common within the culture, though many are quite friendly and transparent. The commonality among them all is a decided pivot away from verifying an image’s rights and providing assurances to end users, toward shifting risk back onto participating parties on either side of the platform. Despite the volume of images being added online every moment, copyright law still gives recourse to those who seek it.