Sunday, December 15, 2019

Conferences in 2019

As the Christmas and New Year holidays are coming up, I wanted to reflect on two conferences in two different countries -- Estonia and Ukraine -- that I had the pleasure of participating in and / or organizing this year.

AINL

The AINL Conference (https://ainlconf.ru/2019/program), held in Tartu in November, focused a lot on applying deep learning to NLProc, with two tutorials: one by Dmitry Ustalov (Yandex) on Crowdsourcing on Language Resources and Evaluation, and one by Andrey Kutuzov (University of Oslo) on Diachronic Word Embeddings for Semantic Shifts Modelling. Andrey Kutuzov's tutorial was practical and involved some Python coding, resulting in a pull request, https://github.com/wadimiusz/diachrony_for_russian/pull/5, that I submitted for the task of comparing semantic shifts in meaning between the Soviet and post-Soviet eras. This code uses Jaccard similarity as a local method for detecting shifts in meaning. There are also global methods, like Procrustes alignment, whose only downside is that it is slower than Jaccard. You can read more details on the task in Andrey's AINL slides.
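For the curious, here is a minimal sketch of the Jaccard idea, assuming you have two word2vec models trained on a Soviet-era and a post-Soviet-era corpus (the file names and the example word are my own placeholders, not taken from the pull request):

```python
from gensim.models import KeyedVectors

# Two embedding models trained on different eras
# (hypothetical file names, for illustration only).
soviet = KeyedVectors.load_word2vec_format("soviet.w2v", binary=True)
post_soviet = KeyedVectors.load_word2vec_format("post_soviet.w2v", binary=True)

def jaccard_shift(word, model_a, model_b, topn=10):
    """Jaccard similarity between a word's nearest-neighbour sets in two
    embedding spaces; a low value hints at a shift in meaning."""
    neighbours_a = {w for w, _ in model_a.most_similar(word, topn=topn)}
    neighbours_b = {w for w, _ in model_b.most_similar(word, topn=topn)}
    union = neighbours_a | neighbours_b
    return len(neighbours_a & neighbours_b) / len(union) if union else 0.0

# Words whose neighbourhoods barely overlap are shift candidates.
print(jaccard_shift("товарищ", soviet, post_soviet))
```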


Credit: Dmitry Kan



In terms of submitted papers, the review process was double-blind and involved at least 3 reviewers per paper. The result was a 30% acceptance rate; the 12 out of 40 papers that did make it focused on data acquisition and annotation, human-computer interaction, statistical NLProc (including a paper by Ansis Bērziņš on the usage of speech recognition for determining language similarity -- video) and neural language models (one of the works, on morpheme segmentation using a Bi-LSTM model, cited the work of Mathias Creutz, with whom we worked at AlphaSense in 2010-2016).

The last day of the conference focused on industrial applications of AI in NLProc. By invitation of Lidia Pivovarova (University of Helsinki), I presented on the search engine and NLProc work we've done at AlphaSense, including smart synonyms, sentiment analysis, named entity recognition and salience resolution, theme modelling and high-precision search.

One of the challenges of the industrial presentation was that it had to last 1.5 hours. If you consider your audience's ability to focus for only 40 minutes, you have got to do something other than 65 slides. I decided to make about 30 slides and then handle the rest of my talk with Q&A. The outcome was very surprising to me, because the audience did want to learn the details of the AlphaSense product, making the Q&A last for 50 minutes. Quite a few questions I managed to answer with the product itself -- this sparked genuine interest in understanding the UI of an AI product powering the financial industry. I hope it was beneficial for the audience to dive into the workflows of financial knowledge workers and to see how NLProc can help them solve their daily routine tasks better.


Customer Development Marathon

Customer development is a topic that interests me from the product development point of view. Just recently I've learnt about the jobs-to-be-done approach to mining for the real jobs that your customers hire your product for. One example with which Clayton Christensen of Harvard Business School motivates this approach is the job that male consumers of milkshakes had on their way to work every day: stay engaged during the monotonous drive and stay full until 10 a.m.

The conference (or marathon, as we called it) on customer development attracted 70 participants at the iHUB co-working center in Kyiv, Ukraine. Speakers from various established companies -- YouScan, MacPaw, PromoRepublic, Competera, AlphaSense, Kyivstar, Terrasoft, PMLab, Portmone.com, Weblium, VARUS, SendPulse, EVO.company -- presented 5-minute talks about specific cases of engaging with their customers to grow conversion, retention and happiness with their products. Following the presentations, the discussion panels dug deeper into how to implement a customer-centric business.


Credit: Maria Kudinova



We've organized the marathon in 3 panels: 

  1. Idea. Analysis. Validation.
  2. Creation. Delivery. Launch.
  3. Sales. Feedback. Innovation.


Each of these panels focused on a particular stage of product development, from idea to the post-sale feedback and innovation loop. The audience learnt how to conduct an efficient user interview, what tools help reach out to new or existing clients, how not to let your product slide into consulting or outsourcing, and how to establish internal company-wide communication to stay on the same page when shaping the product, marketing and sales around customer needs.

Both events were full of networking, meeting new and familiar faces in industry and academia, and learning a lot. Whatever you aspire to build next year, focusing on the real value and ease of use of your NLP / AI / search products, and thinking about what job your users hire your products for, will help you serve them better.

Thursday, October 24, 2019

Eight thoughts on revolutionary changes


The Martian is my top favourite movie (and book): it shows in action the excitement of the engineering professions. Mark Watney, left alone on Mars, fights for his life with all the knowledge and skills he has, from botany to chemistry and physics. Of all engineering professions, software engineering is probably the most booming right now, in light of the Artificial Intelligence breakthroughs. But does this profession have ethical aspects that we as engineers and humans need to be continuously thinking about?

I began to follow the work of Yuval Noah Harari and his call to humanity about a potentially big issue we are facing. It has not yet dawned on many of us that we should start thinking about potential threats to how we operate today. Many of us focus on day-to-day activities and may not have enough time to look beyond them.

Right now I see two figures on the global scene who publicly speak with some urgency about AI and its potentially disruptive impact on and change for civilisation. Harari claims we are at a crossroads that leads to new types of human beings, who will have supporting AI and biological improvements made to them. Elon Musk claims that AI is already way smarter than humans at specific tasks (take Go) and that we need to start thinking about how to control it. And AI keeps improving towards higher and higher degrees of freedom (from checkers, through chess, to Go is a change of a few orders of magnitude in the degrees of freedom each game allows). So eventually AI will beat human beings in whatever is possible. One of the contemporary examples touching me personally is robots ironing clothes or wiping off coffee spills with a high level of movement precision, similar to that of humans:





But I’m sure Musk and Harari mean more than that.

A simple example Harari gives: IoT devices will record your pulse / endorphin level as you see your political leader, and so the government will know how happy and faithful you are towards your leadership. Or they will decide what ads to show you depending on your sexual orientation, based on what you have written / read / watched (even well before you understand your orientation yourself).

When more and more AI-powered robots take away the routine tasks, we as humanity will have two development paths: wear complacency and become lazier, or seek creativity. The first path is always within reach, especially at a time when we would work 3 days a week, 4 hours a day (by Jack Ma's prediction). Will AI thus become even more dominant and take the lead over humanity? As Musk puts it, eventually AI will write its own software and will be far more efficient at it than modern AI engineers. At some point humans, having slower interfaces for producing / consuming data and knowledge, will be left behind by AI, and it can turn into a catch-up game.

Given these potential issues that automation with AI is posing, those of us who focus on automation and AI touch on the ethical boundaries of our work. If you will, we are participating in the launch and acceleration of an AI revolution that might not yet be visible to all people on the Planet. But we need to be aware that we are changing the fabric of society by rewriting job markets and the work skills in demand, and by allowing new types of human / robot elites to control the people around them with AI.

I would like to share a different perspective on revolutions, Planet-agnostic:
  1. Making an industry-level revolution is hard. There are many reasons, one of which is simply human laziness. Who in their sane mind would want to change anything in the production process when it is comfortable as is?
  2. The revolution force should be so strong that it is able to overcome individual and collective laziness / resistance, and it should still be obvious to everyone involved that it is a change for the good.
  3. As we progress into the future, many of these (small and big) revolutions will make life easier, and hence more lazy participants will emerge.
  4. When laziness saturates, what direction will the next revolution take, and will it really optimise for making things globally sustainable (like the climate, or flying to Mars), or locally, to cater to individuals' needs and make us even more complacent?
  5. This leaves true revolutionary breakthroughs to unsettled minds that challenge everything they see. Which makes such people highly uncomfortable for the lazy ones to be around.
  6. And naturally, the unsettled minds don't have much time to enjoy the results of their doing (assuming the time span of a revolution is less than their lifespan).
  7. Yet the lazy will eventually benefit from these revolutions and become lazier.
  8. The question then is: how to optimise for a global goal while keeping as many people on the Planet involved, so that knowledge and the results of revolutions stay more evenly distributed?
I thank Derek Kannenberg and Tatiana Batanina for reading drafts of this essay and providing constructive feedback and thoughts.
Originally published at https://www.linkedin.com.

Monday, April 29, 2019

Company culture

Long gone are the days when company culture did not matter or was a second-class citizen. Today, when choosing a company to work for, above all you choose the culture (maybe without even realizing it, thinking that you are after the technology or the product). When you look at job openings or office photos with employee smiles and a generally cheerful atmosphere, you will likely not see the culture of the company. You may get a glimpse of it during the interview process, but it is not enough.

Credit: https://www.inc.com/marla-tabaka/7-elements-of-a-great-company-culture.html

Why is culture important?

What is culture? Citing Wikipedia:

Culture (/ˈkʌltʃər/) is the social behavior and norms found in human societies.

To me, company culture boils down to everyday activities, like running projects, exchanging information and planning. I do not think that culture can be imposed. Observing it and declaring core values, however, makes sense.

You can run a simple test to see the edges of your culture: if two employees sharing the same language are talking in the kitchen and a third one, of a different nationality, enters, will the first two switch to a language common to all three? You can argue they don't have to. And this is where the culture begins: is it inclusive? Is it about socializing together or in smaller groups? This in turn will most likely affect the level of collaboration amongst these groups during real projects.

Beyond language, there are many aspects of culture that directly impact the results of a company. Take decision making for one. When conflicting parties meet to discuss a pressing matter -- how will they exchange ideas? In what fashion will they criticize each other's ideas?



Why is all this so important? Well, it depends. Some will say: "we don't care about the internal kitchen of how a result was achieved". But you can also ask yourself: what kind of place would you like to work at? Is it a place where everyone contributes their share and wants to be heard? Or is it a place where everyone (100% inclusively) knows that sharing ideas will be supported, no matter how smart or stupid an idea is? "What does it matter, if the result is what it is", you may argue. If you are building a great place to work, you would like to make it great for everyone, not just a few. And the hardest part is to re-evaluate your culture as if you had just joined the company (really hard, I know).

Ingredients of a solid culture

Building a good culture is a process that should evolve. There will be new people joining and new teams forming around these people. Through the lens of working in several small and mid-size companies (with hundreds of employees), I have found the following ingredients of a company culture to really make a difference, specifically for IT product companies.

1. Readiness to support. Do your engineers willingly walk an extra mile to support each other, beyond sprint meetings, calls with product managers and status updates? Do they walk around the office / scan the work chat and ask if they could help in passing? A culture of support prompts a lot of things to happen, like idea formation, knowledge sharing and a generally positive vibe in the office / chat. Combined, this makes work not only fun, but also more efficient. What to watch for here is "abuse" of the helpers. Helping is best measured and rewarded in some way, to see whether everyone tunes in to the same way of collaborating.

2. Ego and the ability to acknowledge your own mistakes. It is not infrequent that knowledgeable people tend to speak up / show off just because they carry the knowledge, and they may come across as arrogant. The cultural aspect to watch is ego: the less of it, the better for internal communication / collaboration. This way you achieve a higher level of inclusiveness -- everyone can learn from each other without mental punishment and is free to influence larger architectures. Connected with low ego is the rare skill of acknowledging your own mistakes. First, it shows that you value the learning process. And second, you are human, not a metal-made robot that only improves. You can make a mistake, you are human, and you can communicate that freely. This sets a great example for peers: making mistakes is not going to produce career-impacting drama, and more -- mistakes can even be rewarded, because each one is a crucial bit of knowledge. Share it at a weekly demo! You may save time for your team.

3. Knowledge sharing sessions. Surprisingly, a lot of information flies past engineers when they are not involved in a particular project. Knowledge sharing sessions are the key -- not only within the team or adjacent departments, but at the overall company level. They are the venue to convey a large message, such as a process update in how tickets are filed, a way to document your component / feature, or a way to break down a release. Tying into the first two points, they are also a way to share some painfully earned lesson or a glorious bit of system design that would not be acknowledged by pretty much anybody unless light is shed on it in good colour.

4. Meaningful meetings. Meetings without prep are time eaters. Save the 30 minutes of a prep-less meeting and ask the relevant parties to prep for the next one. If you can avoid a meeting, do! It is way better for an engineer to go read a blog post on some tech / algorithm / system, or to spend extra time figuring out more test cases for their code. Don't waste their time by asking for their statuses, unless it leads to a good discussion; there are other ways to share statuses, over work chat for instance. How can making meetings king kill your culture? People will evade them or use smartphones to mentally "fly away" while the other dude on the team shares a status. If you do run such meetings, cap them at 15 minutes and ask everyone to put down their phones, listen and participate.

5. A culture of retros. Retrospectives (after a milestone or project completion) are a great way to achieve two things: a. understand what went wrong and plan on improving it; b. release stress after a tough milestone / project. Mentioning in passing, over chat / email / a call, that something needs improvement will lead to a 0% positive outcome.

6. Have all the folks in the company equally accessible. The early days of a startup enjoy full connectedness: it is so easy to lean over to the next desk and ask a question -- of anybody. The bigger the company, the harder it becomes: vastly different time zones, narrow focus within teams, "busyness" syndromes. If you are a top manager, make every effort possible to be available for a chat. It will only help to retain a good level of bonding and will help information circulate in the company. Consider it constant retraining of your staff. Sharing info in memos? That might be the only way in a triple-digit-headcount company, but personal 1-1s are better.

7. Maintain a wide focus. An issue I've seen in IT product companies (it can apply to other industries too) is focus narrowing over time. The winning argument is that it helps with the velocity of development. But there is a downside too: engineers will tend to narrow down their view of the product and eventually degrade as professionals. Generating ideas on how to improve the product and practicing dogfooding are the gateways to keeping engineers motivated, learning and contributing. Roles are great because they identify the responsible drivers of a particular piece of functionality, but getting input and a feedback loop from thinkers and tinkerers (engineers) can push the product to new frontiers.


I hope these culture ingredients are useful to consider in your company. What other large cultural aspects do you maintain in your company? Feel free to share!

Wednesday, December 12, 2018

Automatic writing with Deep Learning: Progress

This is a continuation of the post https://dmitrykan.blogspot.com/2018/05/automatic-writing-with-deep-learning.html. This item was reblogged at Writer's DZone: https://dzone.com/articles/automatic-writing-with-deep-learning-progress

Fast forward a few months (apologies for the delay), and I can share some findings. Again, I think we should take AI co-writer exercises with a grain of salt. However, during this time I have come across practical usage areas for such systems.

One of them is the augmentation of a news article writer. More specifically, when writing a news item, one of the most challenging tasks is to coin a catchy title. Does the title have some trendy phrases in it? Does it mention an emerging topic that captures attention at this very moment? Does it reuse a pattern that worked well for this author? Or does it just spur an idea in the author's head?

Copyright: https://www.rogerwilco.co.za/blog/robot-writers-how-ai-will-affect-copywriting


In the following exercise I set a very modest goal: train a co-writer on previously written texts and attempt to suggest something useful from them. I could imagine this being extended to trending texts, or to a collection of particularly interesting titles, or what have you.

To train such a model I have used Robin Sloan's rnn-writer: https://github.com/robinsloan/rnn-writer. The goodies of the project are:
  • It is trained with Torch. Nowadays, Torch lives on via PyTorch, a deep learning Python library that is nearing production readiness.
  • The trained model gets exposed in Atom, a pluginable editor (I'd imagine real writers would want to have the model integrated into their favourite editor, like Word).
  • An API is available too, for integrating into custom apps (and this is exactly how it is integrated with Atom).

I will skip the installation of Torch and the training of the network, and proceed to examples. The rnn-writer GitHub repository has a good set of instructions to follow. I have installed Torch and trained the model on a Mac.

First things first: an RNN trained on my Master's Thesis "Design and Implementation of Peer-to-Peer Network" (University of Kuopio, 2007).


The text of the Master's Thesis is about 50 pages in English, with diagrams and formulas. On the one hand, having more data lets NNs learn more word representations and gives a larger probability space for predicting the next word, conditioned on the current word or phrase. On the other hand, limiting the input corpus to phrases that serve a certain domain goal, like writing an email, could yield a clean set of phrases that a user employs in many typical email passages.
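The actual rnn-writer model is trained with Lua Torch, so rather than quoting its code, here is a minimal PyTorch sketch of the same character-level idea -- my own toy stand-in, not the project's implementation -- showing how such a network learns to predict the next character and then samples text:

```python
import torch
import torch.nn as nn

# Toy corpus standing in for the thesis text.
text = "design and implementation of peer-to-peer network " * 20
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

# Train on random windows: predict the next character at each position.
seq_len = 32
for step in range(200):  # a handful of steps is enough for a toy demo
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample: feed a seed character and keep drawing from the softmax.
idx = torch.tensor([[stoi["d"]]])
state, out = None, "d"
for _ in range(80):
    logits, state = model(idx, state)
    probs = torch.softmax(logits[0, -1], dim=-1)
    idx = torch.multinomial(probs, 1).unsqueeze(0)
    out += chars[idx.item()]
print(out)
```

A real co-writer would train on the full thesis text for many more steps and expose the sampler behind an API, which is essentially what rnn-writer does for Atom.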

As I got access to Fox articles, I thought this could warrant another RNN model and a test. Something to share next time.

Sunday, May 6, 2018

Automatic writing with Deep Learning: Preface


This article was also reblogged at: https://dzone.com/articles/automatic-writing-with-deep-learning-preface


Quite a few machine and deep learning problems are directed at building a mapping function of roughly the following form:


Input X ---> Output Y,


where:

X is some sort of object: an email text, an image, a document;

Y is either a single class label from a finite set of labels (like spam / no spam, a detected object, or a cluster name for this document) or some number, like next month's salary or a stock price.

While such tasks can be daunting to solve (like sentiment analysis or predicting stock prices in real time), they require rather clear steps to achieve good levels of mapping accuracy. Again, I'm not discussing situations with a lack of training data to cover the modelled phenomenon, or with poor feature selection.
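As a concrete toy instance of such an X -> Y mapping, here is a small scikit-learn sketch (the four-email dataset is made up purely for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# X: email texts; Y: class labels from a finite set {spam, ham}.
X = ["win a free prize now", "meeting at 10 tomorrow",
     "cheap pills online", "quarterly report attached"]
y = ["spam", "ham", "spam", "ham"]

# The learned pipeline is the mapping function X -> Y.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(X, y)
print(clf.predict(["free cheap prize"]))  # -> ['spam']
```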

In contrast, somewhat less straightforward areas of AI are the tasks that present you with the challenge of predicting structures as fuzzy as words, sentences or complete texts. What are the examples? Machine translation, for one; natural language generation, for another. One may argue that transcribing audio to text is also this type of mapping, but I'd argue it is not. Audio is a "wave", and speech recognition is a reasonably well-solved task (with state of the art above 90% accuracy); however, such an algorithm does not capture the meaning of the produced text, except where it is necessary to disambiguate what was said. Again, I have to make it clear that the audio -> text problem is not at all easy, with its own intricacies, like handling speakers' self-corrections, noise and so on.



Lately, the task of writing texts with a machine (e.g. here) caught my eye on Twitter. Previously, papers from Google on writing poetry and other text-producing software gave me creepy feelings. I somehow underestimated the role of such algorithms in the space of natural language processing and language understanding, and saw only diminishing value for users in such systems. Again, any challenging task might be solved and even bring value to solving other challenging tasks. But who would use an automatic poetry writing system? Why would somebody, I thought, use these systems -- just for fun? My practical mind battled against such "fun" algorithms. And yet, making an AI/NLProc system capable of producing anything sensible is hard. Take the task of sentiment analysis, where it is quite unclear what the agreement even between experts is, not to mention non-experts.

I think this post has poured enough text onto the heads of my readers. I will use it as a self-motivating mechanism to continue the research into systems that produce text. My target is to complete the neural network training on the text of my Master's thesis and show you some examples, so you can judge the usefulness of such systems for yourself.

Saturday, May 5, 2018

AI for lip reading

It is exciting to push your imagination for where else you can apply AI, machine learning and, most certainly, deep learning, which is so popular these days. I came across a question on Quora that provoked me to think a bit about how one would go about training a neural network to lip read. I don't actually know what made me answer this question more: that I found myself in an unusual context, sitting at an AngularJS meetup at the Google offices in New York City (after work, the usual level of tired), or the question itself. Whatever the reason, here is my answer:

Source: http://theconversation.com/our-lip-reading-technology-promises-to-make-hearing-aids-more-human-45166

I would probably first start by formalizing what the lip reading process is, from the point of view of a human-understandable algorithm. Maybe it is worth talking to a professional, like a spy or something. Obviously you need training data, and understanding what lip reading is from the algorithm's perspective will affect what data you need.


    1. To read a word of several syllables, you'd need a sequence of anchor lip positions that represent syllables. Or probably vowels / consonants. See, I don't know which one is best. But you'd need to start with the lowest level possible, out of which you can compose larger sequences, like letters -> syllables -> words. Let's call these states.
    2. A particular lip posture (is that the right word?) will most probably map to ambiguous states.
    3. Now the interesting part is how to resolve the ambiguities. Step 2 produces several options; out of these you can produce a multitude of words that we can call candidates.
    4. Then you need to score the candidates based on some local context information. Here it turns into natural language understanding.
    5. I'd start with seq2seq. (A toy sketch of steps 2-4 follows after this list.)
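To make steps 2-4 concrete, here is a toy sketch; the lip-state-to-phoneme mapping and the word frequencies are entirely invented, standing in for a trained visual model and a language model:

```python
from itertools import product

# Step 2: each observed lip state maps to several possible phonemes
# (a made-up mapping for illustration).
lip_states = [{"p", "b", "m"}, {"a"}, {"t", "d", "n"}]

# Step 3: expand the ambiguity into candidate words.
candidates = {"".join(p) for p in product(*lip_states)}

# Step 4: score the candidates; a real system would use a language
# model in context, here a toy unigram frequency table stands in.
word_freq = {"bat": 0.4, "mat": 0.3, "pad": 0.2, "man": 0.5}
best = max(candidates, key=lambda w: word_freq.get(w, 0.0))
print(sorted(candidates), "->", best)  # 'man' wins under this toy model
```

A real system would replace the frequency table with a sequence model (e.g. seq2seq) scoring candidates against their surrounding context.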

Tuesday, January 16, 2018

New Luke on JavaFX

Hello and Happy New Year to my readers!

I'm happy to announce the release of a completely reimplemented Luke, now using JavaFX technology. Luke is the toolbox for analyzing and maintaining your Lucene / Solr / Elasticsearch index at a low level.

The implementation was contributed by Tomoko Uchida, who also did the honors of releasing it.

The excitement of this release is supported by the fact that in this version Luke becomes fully compliant with the ALv2 license! And it gets very close to being contributed to the Lucene project. At this point we need lots of testing to make sure the JavaFX version is on par with the original Thinlet-based one.

Here is how the load index screen looks in the new JavaFX luke:


After navigating to the Solr 7.1 index and pressing OK, here is what luke shows:


I have loaded an index of the Finnish Wikipedia with 1,069,778 documents, and luke tells me that the index has no deletions and was not optimized. Let's go ahead and optimize it:




Notice that in this dialogue you can request only expunging deleted docs, without merging (the costly part for large indices). After the optimization is complete, you'll have a full log of actions in front of you to confirm that the operation was successful:


You can also opt to check the health of your index via the Tools -> Check index menu item:



Let's move to the Search tab. It has changed slightly, in that the search box has moved to the right, while search settings and other knobs have moved to the left.

Thinlet version:


JavaFX version:



The UI is now more intuitive in terms of access to various tools like Analyzer, Similarity (now with access to the parameters of the new BM25 ranking model, which became the default in Lucene and hence the default in luke) and More Like This. There is a new Sort sub-tab that lets you choose a primary and a secondary field to sort on. The Collectors tab, however, is gone: please let us know if you used it for some task -- we would love to learn.
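For reference, the BM25 formula whose parameters the Similarity tool exposes is:

$$\mathrm{score}(D, Q) = \sum_{q_i \in Q} \mathrm{IDF}(q_i) \cdot \frac{f(q_i, D)\,(k_1 + 1)}{f(q_i, D) + k_1 \left(1 - b + b \cdot \frac{|D|}{\mathrm{avgdl}}\right)}$$

where f(q_i, D) is the frequency of term q_i in document D, |D| is the document length, avgdl is the average document length in the index, and k_1 and b are the tunable parameters (Lucene's defaults are k_1 = 1.2 and b = 0.75).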

Moving on to the Analysis tab, I'd like to draw your attention to the really cool functionality of loading custom jars with your implementations of a character filter, tokenizer or token filter, to form your own custom analyzer. You can test these right in the luke UI, without the need to reload shards in your Solr / Elasticsearch installation:



Last, but not least, is the Logs tab. Essentially, you have been missing it for as long as luke has existed: it gives you a handle on what's happening behind the scenes, during an error case or normal operation.

In addition, this version of Luke supports the recently released Lucene 7.2.0.