As part of the redesign of the Left Navigation bar, we are going to change the theme system on the Web to be more consistent with the theming on mobile: offering users the choice between a white theme and a dark theme.
Here is a preview of what the dark theme might look like:
Over the last twelve months, we interacted with hundreds of cybersecurity teams. One common theme we keep hearing is that it is increasingly hard to keep up with trends and threats in the security space.
In 2018, fifteen thousand vulnerabilities were discovered and the number of exploits doubled, resulting in roughly four new security articles published on the Web every second.
This is a problem we are very passionate about so we are excited to announce a new Leo Security Skill that allows you to prioritize within your feeds the articles that reference the most critical vulnerabilities.
It is a powerful way of focusing your attention on the 10% of vulnerabilities that matter the most, taking into consideration the CVSS score, the content of the article, the level of awareness of the CVE, and the products and vectors you care about.
For example, here is a quick tour of how you can train Leo to prioritize the high severity threats related to Microsoft products.
Discover the Best Cybersecurity Sources
The first step, if you do not follow vulnerability sources yet, is to click on Add Content and search for #security or #vulnerability. You will see a list of about one thousand security publications, blogs, and subject matter experts you can easily add to your Feedly. Create a Vulnerabilities feed and add ten to fifteen sources.
Because Feedly is an open platform, you can add any source you want to follow that publishes an RSS feed.
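Under the hood, an RSS feed is just an XML document. As a minimal sketch of what following a feed involves, here is how a made-up RSS 2.0 snippet can be parsed into article titles and links using only Python's standard library (the feed content and URLs are illustrative):

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document (hypothetical feed content for illustration).
RSS_SAMPLE = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Security Blog</title>
    <item>
      <title>Critical RCE in Example Server</title>
      <link>https://example.com/rce-advisory</link>
    </item>
    <item>
      <title>Patch Tuesday Roundup</title>
      <link>https://example.com/patch-tuesday</link>
    </item>
  </channel>
</rss>"""

def parse_rss(xml_text):
    """Return a list of (title, link) pairs, one per <item> in the feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

articles = parse_rss(RSS_SAMPLE)
```

In practice, a reader polls the feed URL on a schedule, parses each update like this, and merges the new items into your feed.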
Train Your Leo
The second step is to train Leo to prioritize the most critical vulnerabilities in your feed. Most security teams care about the top 10% of the vulnerabilities that have a CVSS score greater than 8 and/or have an exploit.
The Leo Security Skill allows Leo to either look up or predict the CVSS score of a vulnerability mentioned in an article. So when a new article is published in your feed, Leo will first try to look up the CVSS and exploit information from the Web. If there is no CVE or CVSS, it will try to predict the severity of the vulnerability based on the content and terminology used in the article.
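To make the lookup-then-predict flow concrete, here is a simplified sketch in Python. The CVE table and the terminology-based heuristic below are illustrative stand-ins, not Feedly's actual data or models:

```python
# Hypothetical CVE lookup table standing in for a real vulnerability database.
CVE_DATABASE = {
    "CVE-2019-0708": {"cvss": 9.8, "exploit": True},
    "CVE-2018-8174": {"cvss": 7.5, "exploit": True},
}

# Crude stand-in for a learned severity model: high-signal terminology.
HIGH_SEVERITY_TERMS = {"remote code execution", "wormable", "unauthenticated"}

def estimate_severity(article):
    """Look up the CVSS score for a referenced CVE; fall back to a naive
    content-based prediction when no CVE/CVSS is available."""
    cve = article.get("cve")
    if cve in CVE_DATABASE:
        return CVE_DATABASE[cve]["cvss"]
    # No CVE found: predict severity from the terminology in the article.
    text = article["body"].lower()
    hits = sum(term in text for term in HIGH_SEVERITY_TERMS)
    return min(10.0, 5.0 + 1.5 * hits)  # toy heuristic, capped at 10

def is_priority(article, threshold=8.0):
    """Most teams care about vulnerabilities with a CVSS score above 8."""
    return estimate_severity(article) >= threshold

looked_up = {"cve": "CVE-2019-0708", "body": "..."}
predicted = {"cve": None, "body": "A wormable remote code execution flaw..."}
```

The first article is scored directly from the lookup table; the second has no CVE, so its severity is predicted from its wording.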
Training Leo to prioritize high severity vulnerabilities around products you care about is simple.
In the priority modeler, add a first layer of type Security Threat and select the High threshold.
Then add a second Topic layer and pick the list of products you would like Leo to track. Leo will combine both layers and look for high severity vulnerabilities mentioning the products you care about.
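Conceptually, the two layers combine as a logical AND: an article is prioritized only if it is high severity and mentions one of your tracked products. A toy sketch of that combination (the product list and field names are assumptions, not Feedly's schema):

```python
# Layer 2: products to track (e.g. a Microsoft-focused team's list).
TRACKED_PRODUCTS = {"windows", "exchange", "office"}

def matches_priority(article, min_cvss=8.0):
    """Layer 1: high severity threat. Layer 2: tracked product.
    Both layers must match for the article to be prioritized."""
    high_severity = article["cvss"] >= min_cvss
    mentions_product = any(p in article["body"].lower() for p in TRACKED_PRODUCTS)
    return high_severity and mentions_product

feed = [
    {"title": "RDP flaw",    "cvss": 9.8, "body": "Affects Windows 7 and Server 2008."},
    {"title": "Minor bug",   "cvss": 4.3, "body": "Low-impact issue in Windows."},
    {"title": "Router flaw", "cvss": 9.1, "body": "Affects consumer routers."},
]
priority_queue = [a["title"] for a in feed if matches_priority(a)]
```

Only the article that clears both layers lands in the priority queue; a severe router flaw or a minor Windows bug does not.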
Read, Share, and Shine
Leo will continuously read your Vulnerabilities feed, and when an article matches the high severity threshold and mentions a product you care about, Leo will annotate that article and move it to your priority queue.
When you open your Vulnerabilities feed, you will first see the shortlist of articles Leo has prioritized. If Leo has found the CVSS information for the mentioned vulnerability, you will see it as part of the metadata of the article.
Prioritized articles have a green marker with the name of the priority. If you click on that marker, you will be presented with a short explanation of why Leo prioritized this article and the controls for you to refine Leo’s training.
This aspect around control and transparency is really important to us. It is what we call collaborative intelligence.
If you see an article or vulnerability that is particularly important, you can save that article into a Feedly board and configure that board to push the content to an email newsletter, a Slack channel or a Microsoft Teams channel. Boards are a powerful way to keep important articles for reference and easily share with your teammates.
Continuously Learning and Getting Smarter
One of the powers of Leo is that he is constantly collaborating with you and learning from you. If you see an article that is highly relevant, you can save it to a board and then use the content of that board to reinforce Leo’s learning via a Like-board skill.
If Leo was wrong about detecting a vulnerability, assigning a severity to it, or detecting a product you are interested in, you can at any time click the down arrow icon (also called the Less Like This icon) and provide feedback to Leo.
That feedback is processed daily and used to continuously improve the various machine learning models that power Leo.
Join the Leo Beta
The Leo cybersecurity skill was created over the last 12 months in close collaboration with two of the largest and most advanced security teams in Silicon Valley.
We are excited to hear what the Leo beta community thinks about this new skill! If you are part of a security team and would like to test drive Leo Cyber Security, please join the beta program.
This is the first step for us to bring some of the work we are doing with Leo and discovery to Feedly Mini. Let us know what you think by joining the Feedly Lab Slack community, and expect to see more in the next three to six months as Leo matures.
Thank you to all the teams who have sent questions, feedback, and bug reports!
Why didn’t I receive my newsletter? Common problems & solutions:
No new articles saved since the last newsletter was sent. The newsletter will only send if there are new articles available (i.e. saved) in the board.
Solution: For now, we suggest removing and then re-saving some articles to the board. After that, return to the newsletter dashboard and hit “send now” once again.
It works the same way the very first time you activate a newsletter and for your future scheduled newsletters.
Maybe the newsletter is in spam.
Solution: Please check your spam folder and add <email@example.com> to your address book. That will tell your email provider to deliver newsletters to your inbox.
What articles will (or won’t) be included in the newsletter when I hit Send Now?
On-demand newsletters only include new articles saved since the last newsletter was sent. This is the most common reason why a newsletter doesn’t send.
To send an on-demand newsletter with specific articles, we suggest removing and then re-saving those articles to the board. After that, return to the newsletter dashboard and hit “send now” once again.
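In other words, the send check compares each article’s saved timestamp against the time of the last send, which is also why removing and re-saving an article makes it eligible again. A simplified sketch of that check (the field names are assumptions):

```python
from datetime import datetime

def articles_to_send(board, last_sent):
    """Return only articles saved to the board after the last newsletter
    went out; an empty list means 'Send Now' has nothing to send."""
    return [a for a in board if a["saved_at"] > last_sent]

board = [
    {"title": "Old article", "saved_at": datetime(2019, 5, 1)},
    {"title": "Fresh save",  "saved_at": datetime(2019, 5, 10)},
]
last_sent = datetime(2019, 5, 5)

fresh = articles_to_send(board, last_sent)
```

Re-saving an article updates its saved timestamp past the last send time, so it qualifies for the next newsletter.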
What about analytics?
How do I add newsletters to my Feedly account?
We suggest starting a 30-day free trial of Feedly Teams. The trial gives you full access to newsletters and our support team. We are here to help you and your team get the most out of Feedly.
Thank you for trying newsletters! Have a question not answered here? Ask us in the comments or in the app.
Connecting people to the best sources for the topics that matter to them has been core to our mission since the very start of Feedly.
But discovery is a hard problem. The web is organic, a reflection of the global community’s changing needs and priorities. There are millions of sources across thousands of topics and we all have a different appetite when it comes to feeding our minds.
About twelve months ago, we created a machine learning team to see if the latest progress in deep learning and natural language processing could help us crack this nut.
Today, we are excited to give you a preview of the result of that work with the release of the new discovery experience in the Feedly Lab app (Experience 06).
Two thousand topics
The first discovery challenge is to create a taxonomy of topics.
You can think of Feedly as a rich graph of people, topics, and sources. To build the right taxonomy, we started with the raw data on all of Feedly’s sources. We had to create a model to clean, enrich, and organize that data into a hierarchy of topics. Learn more about the data science behind this.
The result is a rich, interconnected network of two thousand English topics. And it maps well to how people expect to explore and read on the Web.
On the discovery homepage, we showcase thirty topics based on popular industries, trends, skills, or passions. You can access all of the topics in Feedly via the search box.
The fifty most interesting sources
The second discovery challenge is to find the fifty most interesting sources someone researching any topic might want to follow.
Ranking sources is hard because not all sources are equal. In tech, for example, you have mainstream publications like The Verge or TechCrunch, expert voices like Ben Thompson, and lots of noisy B-list sources that don’t add much value.
In addition, for niche topics like virtual reality, some sources are specific to VR while others cover a range of related topics.
To solve this challenge, we created a model which looks at sources through three different lenses:
relevance (how focused is the source on the given topic)
engagement (a proxy for quality and attention)
The outcome is new search result cards. You can explore the fifty most interesting sources for a given topic and sort them using the lens that is most important to you.
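Sorting by a lens is straightforward once each source carries a score per lens. A small sketch of that idea, with made-up scores (the numbers and field names are illustrative, not Feedly’s actual data):

```python
# Hypothetical per-source scores for the lenses described above.
SOURCES = [
    {"name": "The Verge",  "relevance": 0.40, "engagement": 0.95},
    {"name": "Road to VR", "relevance": 0.92, "engagement": 0.60},
]

def rank_sources(sources, lens="relevance", limit=50):
    """Sort sources by the selected lens, best first, capped at `limit`."""
    return sorted(sources, key=lambda s: s[lens], reverse=True)[:limit]

by_relevance = rank_sources(SOURCES, lens="relevance")
by_engagement = rank_sources(SOURCES, lens="engagement")
```

A niche VR site wins on relevance for the virtual reality topic, while a mainstream publication wins on engagement, which is why being able to switch lenses matters.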
One of the benefits of the new topic model is that the 2,000 topics are organized in a hierarchy. This makes it easy for you to zoom in or out and explore many different neighborhoods of the Web.
For example, from the cybersecurity topic, you can jump to a list of related topics that let you dig deeper into malware, forensics, or privacy.
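A hierarchy like this can be modeled as a simple tree: zooming in lists a topic’s children, and zooming out walks to its parent. A tiny illustrative slice (the taxonomy below is a made-up fragment):

```python
# Parent -> children edges for a small slice of a topic taxonomy.
TAXONOMY = {
    "tech": ["cybersecurity", "virtual reality"],
    "cybersecurity": ["malware", "forensics", "privacy"],
}

def related_topics(topic):
    """Zooming in: the child topics one level below the given topic."""
    return TAXONOMY.get(topic, [])

def parent_topic(topic):
    """Zooming out: the parent topic, if any."""
    for parent, children in TAXONOMY.items():
        if topic in children:
            return parent
    return None
```

From cybersecurity you can zoom in to malware, forensics, or privacy, or zoom out to the broader tech neighborhood.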
One more thing…
We have done a lot of research over the last four years to understand how people discover new sources. One insight we learned is that people often co-read certain sources. For example, if you are interested in art, design, and pop culture and you follow Fubiz, there is a high chance that you also follow Designboom.
With that in mind, we spent some time creating a model that learns what sources are often co-read. The idea is that a user could enter a source that they love and discover another source they could pair it with.
As a user, you can access this feature by searching in the discover page for a source you love to read. The result will be a list of sources which are often co-read with that source.
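One simple way to learn co-read pairs is to count how often two sources appear together across users’ subscription lists. A toy sketch of that idea (the subscription data is made up, and the real model is certainly more sophisticated than raw pair counts):

```python
from collections import Counter
from itertools import combinations

# Each user's followed sources (hypothetical data).
SUBSCRIPTIONS = [
    {"Fubiz", "Designboom", "TechCrunch"},
    {"Fubiz", "Designboom"},
    {"TechCrunch", "The Verge"},
]

def co_read_counts(subscriptions):
    """Count how many users follow each unordered pair of sources."""
    pairs = Counter()
    for sources in subscriptions:
        for a, b in combinations(sorted(sources), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(source, subscriptions):
    """Sources most often co-read with the given source, best first."""
    pairs = co_read_counts(subscriptions)
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == source:
            scores[b] += n
        elif b == source:
            scores[a] += n
    return [s for s, _ in scores.most_common()]
```

Searching for a source you love then surfaces its strongest co-read partners, the way Fubiz readers often also follow Designboom.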
I would like to thank Paul, Michelle, Mathieu, and Aymeric for the great research work they did to take this project from zero to one. People who have tried to tackle discovery know that it is a very hard challenge and the results of this project have been very impressive.
We would also like to thank the community for participating in the Battle of the Sources experiment. Your input was key in helping us learn how to model the source ranking. We are going to continue to invest in discovery and we look forward to continuing to collaborate with you.
We would also like to thank Dan Newman, Daron Brewood, Enrico, Joey, Lior, Paul Adams, Ryan Murphy, and Joseph Thornley from the Lab for reviewing an earlier version of this article.