Creating an IVE – Part III: How IVEs make decisions

In this blog series, we will show you how to create an intelligent virtual entity (IVE) that can monitor your social media channels — such as Facebook, LinkedIn, or Twitter — and react to positive or negative posts about your products. To that end, we will need to set up the IVE and teach it to run a sentiment analysis on any social media post mentioning one of your products or brands.

In this part of the series, we will focus on how an IVE decides on what to do next.

IVEs have needs too

The magical thing about IVEs is that they can independently set goals and decide what actions to take next. The user defines an overarching goal, and the IVE figures out how to reach it. It does this in part by focusing on its “needs.” The IVE’s simulated needs are based on an extended version of Maslow’s hierarchy of needs.
The categories of needs an IVE can have are, in hierarchical order from highest to lowest:

  • Self-Transcendence
  • Self-Actualization
  • Esteem
  • Love and Belonging
  • Safety
  • Physiological

At any given point, the IVE’s dominant need will be determined by the “starting need” defined by the user, as well as the experiences it has had during its “lifetime.” Each time the IVE fulfills its needs within a given category, it shifts to the next higher one. As its needs category shifts, so do its ambitions and goals in life.
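To make this more tangible, here is a minimal Python sketch of how such a needs ladder could be modelled: a fixed hierarchy plus a “fulfill and move up one level” rule. The class and method names are our own illustration, not the actual IVE implementation.

```python
from enum import IntEnum

class Need(IntEnum):
    """Extended Maslow-style hierarchy, ordered from lowest to highest."""
    PHYSIOLOGICAL = 1
    SAFETY = 2
    LOVE_AND_BELONGING = 3
    ESTEEM = 4
    SELF_ACTUALIZATION = 5
    SELF_TRANSCENDENCE = 6

class NeedState:
    """Tracks the IVE's dominant need and promotes it once fulfilled."""

    def __init__(self, starting_need: Need = Need.PHYSIOLOGICAL):
        # The "starting need" is defined by the user when the IVE is set up.
        self.current = starting_need

    def fulfill(self) -> Need:
        """Mark the current need category as satisfied and shift up one level."""
        if self.current < Need.SELF_TRANSCENDENCE:
            self.current = Need(self.current + 1)
        return self.current

# Example: an IVE that starts at Safety and fulfills that category
state = NeedState(starting_need=Need.SAFETY)
state.fulfill()
print(state.current)  # Need.LOVE_AND_BELONGING
```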

Beliefs, desires, aversions and intentions

Every IVE has beliefs about the world it exists in. These beliefs are grouped into collections called belief sets. Belief sets define what the IVE knows about the world and thus shape the goals it can aim for. But how does an IVE go about deciding on a goal?
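In code, you can think of a belief set as little more than a named collection of facts the IVE holds to be true. The snippet below is a hypothetical sketch of what the belief sets for our social media IVE might contain; the structure and keys are purely illustrative, not the platform’s real format.

```python
# Hypothetical belief sets for the social media IVE: each entry is a
# proposition the IVE currently holds to be true about its world.
belief_sets = {
    "products": {
        "monitored_brands": ["AcmeWidget", "AcmeGadget"],
    },
    "channels": {
        "sources": ["Facebook", "LinkedIn", "Twitter"],
    },
    "support": {
        "human_agent_available": True,
    },
}

# The IVE can only form goals about things it has beliefs about: it will
# never aim to "forward a post to a human agent" if it does not believe
# such an agent exists.
```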

As we learned in the article about personalities, the IVE appraises the events it perceives in its world for desirability. Desirability values are linked to concepts, and these concepts allow the IVE to contextualize each new event.

When deciding on a goal, the IVE tries to achieve things it likes (positive desirability) and avoid or remedy things that it does not like (negative desirability).

In our social media project, for example, the concept “Negative Sentiment” would have a desirability of -0.8 assigned to it, which is strongly negative. This means that each time a post with a negative sentiment is made, the IVE would experience a negative emotion. It could make itself “feel better” by taking a desirable action, such as forwarding the angry tweet to a human customer service agent.

An IVE also tries to actively prevent low-desirability events from happening whenever possible. In this example, replying to a tweet in an angry tone would be linked to low desirability, so the IVE wouldn’t do that. It could, however, respond in a friendly tone to try to smooth things over, as this would be a highly desirable action.
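The following sketch shows one way this appraisal could work, under the simplifying assumption that each concept carries a fixed desirability score and the IVE simply picks the available response with the highest score. The -0.8 value mirrors the example above; the function and concept names are our own.

```python
# Desirability scores attached to concepts (roughly in the range -1.0 to 1.0).
concept_desirability = {
    "negative_sentiment": -0.8,   # strongly undesirable event
    "angry_reply": -0.7,          # undesirable action, to be avoided
    "friendly_reply": 0.6,        # desirable remedial action
    "forward_to_agent": 0.7,      # desirable remedial action
}

def appraise(event_concept: str) -> float:
    """Return the emotional impact of a perceived event on the IVE."""
    return concept_desirability.get(event_concept, 0.0)

def choose_response(candidate_actions: list[str]) -> str:
    """Pick the candidate action with the highest desirability."""
    return max(candidate_actions, key=lambda a: concept_desirability.get(a, 0.0))

# A post with negative sentiment is perceived...
impact = appraise("negative_sentiment")   # -0.8 -> the IVE feels a negative emotion
# ...so the IVE looks for the most desirable way to remedy it.
action = choose_response(["angry_reply", "friendly_reply", "forward_to_agent"])
print(impact, action)  # -0.8 forward_to_agent
```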

When the IVE experiences a desire or aversion that is strong enough, it forms an intention. It uses these intentions, together with its current state of needs, to select its next goal.
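Turning a strong desire or aversion into an intention might then look something like this: if the magnitude of an event’s desirability crosses a threshold, an intention is created and fed into goal selection. The threshold value and the fields of the Intention object are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

INTENTION_THRESHOLD = 0.5  # hypothetical cut-off for a "strong enough" desire/aversion

@dataclass
class Intention:
    trigger_concept: str    # what the IVE reacted to
    desirability: float     # how strongly it feels about it
    remedial_action: str    # what it intends to do about it

def form_intention(event_concept: str, desirability: float,
                   remedial_action: str) -> Optional[Intention]:
    """Create an intention only if the desire or aversion is strong enough."""
    if abs(desirability) >= INTENTION_THRESHOLD:
        return Intention(event_concept, desirability, remedial_action)
    return None

# A strongly negative event (-0.8) clears the threshold and yields an intention;
# a mildly negative one (-0.2) would simply be ignored.
intention = form_intention("negative_sentiment", -0.8, "forward_to_agent")
print(intention)
```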

Deciding on goals

In the previous section, we saw how the IVE decides on its next goal. This is how an IVE operates if it is left to its own devices and free to do as it pleases. But IVEs have jobs, too! There are some things they just have to do, whether they like it or not. That is where “goal injection” comes into play.

Imagine if your boss were to stop by your desk, slam down a huge stack of paper, and tell you to get it done by 5pm today. They’ve just injected a goal into your workday. And it works the same way with the IVE, except that you’re the boss and the IVE is obliged to do its job.
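You can picture goal injection as pushing an externally mandated goal into the same queue that holds the IVE’s self-generated goals, with a priority attached so it gets handled first. The sketch below is again purely illustrative; the names and priority scheme are assumptions, not the platform’s real API.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Goal:
    priority: int                      # lower number = more urgent
    description: str = field(compare=False)
    injected: bool = field(default=False, compare=False)

goal_queue: list[Goal] = []

# Goals the IVE generated on its own...
heapq.heappush(goal_queue, Goal(5, "scan Twitter mentions"))
heapq.heappush(goal_queue, Goal(7, "update weekly sentiment report"))

# ...and a goal injected by the user ("the boss"), which jumps the queue.
heapq.heappush(goal_queue, Goal(1, "process complaint backlog by 5pm", injected=True))

while goal_queue:
    print(heapq.heappop(goal_queue).description)
# -> process complaint backlog by 5pm
# -> scan Twitter mentions
# -> update weekly sentiment report
```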

In our next article, we’ll explain how the IVE takes that pile of virtual paperwork and schedules its “workday” to get everything done for you on time — while completing its other routine tasks, as well.