In today’s post, we share insights from last spring’s TechForum presentation, Keywords and Discoverability on Amazon: A Canadian Context, given by Shannon Culver, Technology Manager at eBound Canada, and consultant Amanda Lee. As ebooks and online shopping become more commonplace in the habits of Canadian consumers, eBound Canada researchers teamed up with Firebrand and Kadaxis to study the impact of keywords on ebook sales and discoverability.
While keywords have long been used in scholarly publishing, they are a piece of bibliographic metadata not usually shown to consumers of trade ebooks. Amazon is the only major retailer to use keywords, although the usage and logic of its keyword system remain opaque to most outsiders. Amazon keywords, for example, are used to help consumers purchase items, making their logic slightly different from that of Google SEO keywords, which are used to search the web. Amazon uses keywords to guide consumers to relevant titles based on their search terms. Notably, Amazon maintains a blacklist of certain words and will delete all of a title’s keywords if even one banned word appears in the metadata (the blacklist can be found on Vendor Central at amazon.com). When creating a keyword list, Amazon imposes a data limit of 250 bytes; spaces and punctuation do not count towards the limit, while special characters, such as accented characters, count for more than one byte.
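To make the byte-counting rule concrete, here is a minimal sketch of how a publisher might estimate a keyword string’s cost against the 250-byte limit. It assumes the counting rules exactly as described above (spaces and punctuation are free; everything else costs its UTF-8 byte length); the function names are illustrative, and Amazon’s actual counting logic is not publicly documented.

```python
import string

KEYWORD_BYTE_LIMIT = 250  # Amazon's stated limit, per the talk


def keyword_byte_cost(keywords: str) -> int:
    """Estimate the byte cost of a keyword string, assuming the rules
    described above: spaces and punctuation are free, and every other
    character costs its UTF-8 byte length (so accented characters like
    'é' cost more than plain ASCII letters)."""
    cost = 0
    for ch in keywords:
        if ch.isspace() or ch in string.punctuation:
            continue  # spaces and punctuation don't count toward the limit
        cost += len(ch.encode("utf-8"))
    return cost


def fits_limit(keywords: str) -> bool:
    """Check whether an estimated keyword string fits in the 250-byte limit."""
    return keyword_byte_cost(keywords) <= KEYWORD_BYTE_LIMIT


print(keyword_byte_cost("lion book grade four"))  # 17 letters -> 17 bytes
print(keyword_byte_cost("café"))  # 'é' is 2 bytes in UTF-8 -> 5 bytes total
```

Because accented characters cost extra bytes, a French-language keyword list can run out of room noticeably sooner than an English one of the same visible length.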
Based on this information, eBound Canada researchers created three groups of ebooks to examine the impact of keywords on ebook sales and discoverability. One group had no keywords, another had keywords assigned to each title by artificial intelligence, and a final group had keywords assigned by humans. The humans were trained to assign keywords based on the best practices outlined by BookNet Canada and BISG, which treat keywords as the words a consumer would use to search for a title. Additionally, each group included titles from different sales rankings, from high-ranking top 25 titles to lower-ranking titles with fewer sales. The project measured discoverability by counting the number of times an Amazon Product Detail Page was viewed.
A person trained to assign keywords can do much of the job of the machine. The people creating keywords for this project were trained on best practices and on how users actually talk about books. Machines, on the other hand, generate keyword lists by combing through reviews and promotional text. AIs generate far more words than humans on their keyword lists but are also more likely to include irrelevant words. For this reason, pulling keywords from a book’s content is not recommended: readers do not search by content but by category. For example, a reader would not search for a book about lions using the keywords “claws, pride, manes,” but would instead use a search term like “lion book grade four”.
As with any research project, the keyword study had a few limitations. The results are based only on Amazon Kindle sales, as Amazon is the only retailer that uses keywords. The study also analyzed only ebook sales, even though the path to a physical copy on Amazon is the same. Further, the scope of the study did not include examining whether certain types of keywords performed better than others. It is also important to note that from 2017–2018, sales of ebooks decreased globally.
Regardless of limitations, the results of the research were clear. Keywords had little effect on sales, regardless of who created the keywords, as ebook sales decreased across the board.
Nevertheless, keywords increased the discoverability of titles: consumers clicked through to keyworded titles more often. Additionally, there is growing competition for keywords as more publishers add them to their workflows. For sales, it did not matter who created the keywords, but for discoverability it did. Overall, human-made keywords outperformed those generated by AI, especially for youth, frontlist, and non-fiction ebooks, with the biggest impact on frontlist non-fiction. For titles with low sales specifically, human-made keywords outperformed machine-created keywords, while for high-ranking titles the two performed equally well. The only case in which the machine outperformed humans was the top 25 titles, likely because those titles have more publicity and reviews from which the AI could pull keywords.
For more details about the study, the full report is available online as a PDF.