Grounding in RavenClaw

In his book Using Language, Clark defines grounding as the process by which participants "establish [something] as part of common ground well enough for current purposes". In turn, he defines two people's common ground as "the sum of their mutual, common, or joint knowledge, beliefs and suppositions". There is a rich literature on grounding, and many theories of it, in computational linguistics and psycholinguistics. For practical purposes, grounding in RavenClaw is the process by which the system constructs and updates its beliefs about the values of particular concepts. There are two types of grounding phenomena modeled by RavenClaw:

  • Turn grounding, which deals with cases where no information can be extracted from a particular user turn (non-understandings)
  • Concept grounding, which ensures that the system's beliefs on concept values are accurate, mainly through implicit and explicit confirmation.

Both types of grounding rely on the same core components:

  • A set of strategies, which are the actions the system can take to perform grounding. Usually, these are specific system prompts.
  • A policy by which the system selects a strategy to adopt at each turn.

In the following, we will see how to define the strategies and the policy, as well as how to enable grounding in RCTSL, for both turn and concept grounding.

Turn Grounding

Overview

Turn grounding deals with non-understandings, i.e. turns in which the system believes the user spoke an utterance but is unable to extract any semantic information out of it. This can happen when the speech recognition hypothesis is completely meaningless, or when it's not parsable by the NLU, or when the parse cannot be interpreted in the current dialog context. Consider the example below, for a hypothetical flight information system:

 S: Where do you want to go?
 U: (no parse)
 S: ?

When such a situation occurs, the system can adopt many different strategies. For example, it can:

1. repeat the original prompt

 S: Where do you want to go?

2. ask the user to repeat what they said

 S: Could you repeat that?

3. provide some help

 S: For example, you can say "Boston", or "JFK".

4. abandon the current question and move on to a different topic (at least for now)

 S: Where are you leaving from?

Each strategy is likely to work best under certain circumstances. For example, if the non-understanding was due to a transient noise such as a cough or a door slam in the background, asking the user to repeat might help. On the other hand, if the problem is that the user is using an out-of-vocabulary word, or referring to functionality that the system does not have, providing help and examples of user queries is more likely to resolve the issue. All the strategies we consider here involve specific prompts from the system, so the first step towards enabling turn grounding is to design the necessary prompts.

Writing the strategies' prompts

Like all prompts in Olympus, turn grounding prompts are defined in Rosetta modules. The specific module and prompt labels depend on the strategies that you wish to enable.

Specifying a policy

The problem is that, at runtime, the system cannot know for sure what caused a non-understanding; it can only rely on observable features (signal-to-noise ratio, the ASR hypothesis, etc.) to estimate which strategy, or strategies, is most likely to help. The grounding policy is the rule by which the system makes that choice at each turn.

Enabling turn grounding in the dialog task

Grounding policies live in the Configuration folder (e.g. Configurations/Desktop-SAPI). As of Olympus 2.2, you can specify the list of policy files in two ways.

1. directly in the DM config file (often RavenClaw.cfg) as follows (e.g. see the Tutorial 2 system):

grounding_policies = expl_impl=expl_impl.pol;request_default=request_default.pol

2. in a separate file, whose name you give in the DM config file (as in LetsGoPublic):

grounding_policies_file = grounding.policies

The grounding policies file, which also resides in the Configuration folder, contains something like:

impl = impl.pol
expl = expl.pol
moffo = moffo.pol

These two methods define a policy name (which you will use in the SUBAGENT and CONCEPT definitions in your DM file) and the associated policy file.
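
For reference, the sketch below shows roughly where those policy names go in the dialog task specification (RCTSL). The agency, agent, and concept names here are hypothetical, and the exact macro forms may differ across Olympus versions, so treat an existing DialogTask.cpp (e.g. the one in LetsGoPublic) as the authoritative example rather than this sketch:

// Hypothetical RCTSL fragment (DialogTask.cpp). "expl_impl" and
// "request_default" are policy names defined in the grounding policies
// configuration described above.
DEFINE_AGENCY( CPerformTask,
    DEFINE_CONCEPTS(
        // concept grounding policy attached to a user concept
        STRING_USER_CONCEPT(arrival_place, "expl_impl")
    )
    DEFINE_SUBAGENTS(
        // turn grounding policy attached to a request agent
        SUBAGENT(RequestArrivalPlace, CRequestArrivalPlace, "request_default")
    )
)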


Agent policy files all share a similar structure. A hand-written policy first defines a small number of states in terms of input features (mostly features computed by Helios, whose names start with h4_...), in blocks like the following. This example defines a state called UNDERSTANDING, covering any turn that is not a non-understanding:

STATE[UNDERSTANDING]
 [h4_nonu] = 0
END

After the state definitions comes a matrix that defines the utility of each action in each state. Actions are predefined in RavenClaw. Some example actions are listed below; the full list is in DMCore/Grounding/GroundingActions/AllGroundingActions.h:

NO_ACTION
ACCEPT
EXPL_CONF
IMPL_CONF
ASK_REPEAT
WHAT_CAN_I_SAY
FULL_HELP
EXPLAIN_MORE
GIVE_UP
MOVE_ON
...

You do not need to use all actions in your matrix; the ones that do not appear will simply never be taken by the system (i.e. they have zero utility). An example matrix is below. At each turn, the system decides which state it is in and picks the action that has maximum utility in that state (possibly with some randomness, if the header of the policy file contains a line such as exploration_mode = stochastic or exploration_mode = epsilon-greedy; see the layout sketch after the matrix).

                FAIL_REQUEST   NO_ACTION     ASK_REPEAT    TERSE_WHAT_CAN_I_SAY
FAILED              10              -             -              -
UNDERSTANDING        -             10             -              -
FIRST_NONU           -              -            10              5
SUBSEQUENT_NONU      -              -             5             10
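
Putting these pieces together, a hand-written agent policy file is laid out roughly as follows: an optional header (e.g. the exploration_mode line), then the STATE definitions, then the utility matrix. This is only a sketch assembled from the fragments above, with a made-up NONU state; check an actual file such as request_default.pol in your Configuration folder for the exact syntax your Olympus version expects.

exploration_mode = stochastic

STATE[UNDERSTANDING]
 [h4_nonu] = 0
END

STATE[NONU]
 [h4_nonu] = 1
END

                NO_ACTION     ASK_REPEAT
UNDERSTANDING       10             -
NONU                 -            10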


The Moffo policies, which are trained on data, use a different format for their policy files (the files describe the parameters of the logistic regression models used by the system). See moffo.pol in the Configurations/Desktop-SAPI folder.

Grounding for Concepts

Grounding for concepts works roughly the same way (see the expl_impl.pol policy file), except that the set of states is fixed. The system constantly maintains a distribution over four states (INACTIVE, CONFIDENT, UNCONFIDENT, GROUNDED). The distribution is defined as follows:

  • p(INACTIVE) = 1 if the concept has not been introduced by the user yet, and 0 otherwise
  • p(CONFIDENT) = the current confidence in the value of the concept if the concept has been introduced but not grounded yet, and 0 otherwise
  • p(UNCONFIDENT) = 1 - p(CONFIDENT) if the concept has been introduced but not grounded yet, and 0 otherwise
  • p(GROUNDED) = 1 if the concept has been grounded (confirmed), and 0 otherwise
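
For example, suppose the user has just answered "Boston" to the question "Where do you want to go?", the corresponding concept was heard with confidence 0.6 (a made-up number, for illustration), and it has not been confirmed yet. The distribution is then p(INACTIVE) = 0, p(CONFIDENT) = 0.6, p(UNCONFIDENT) = 0.4, p(GROUNDED) = 0. Once the value has been confirmed, the distribution becomes p(GROUNDED) = 1, with all other states at 0.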

There are also only 3 possible actions:

  • ACCEPT : don't do anything (which is what you usually do in the INACTIVE and GROUNDED states)
  • EXPL_CONF : explicit confirmation ("Leaving from Boston. Did I get that right?")
  • IMPL_CONF : implicit confirmation ("Leaving from Boston. Where are you going?")
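
A hand-written concept grounding policy can then be expressed as a utility matrix over these fixed states and actions, in the same format as the agent policies above. The matrix below is only an illustrative sketch (the utility values are made up, and the actual expl_impl.pol in your configuration folder may differ); it simply encodes the intuition that nothing needs to be done in the INACTIVE and GROUNDED states, that an implicit confirmation is enough when the system is confident, and that an explicit confirmation is preferred when it is not:

                 ACCEPT     EXPL_CONF     IMPL_CONF
INACTIVE            10          -             -
CONFIDENT            -          -            10
UNCONFIDENT          -         10             -
GROUNDED            10          -             -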

For all of these actions (both for agents and for concepts), you need to define the corresponding prompts in the NLG. There are ExplicitConfirm.pm and ImplicitConfirm.pm files in LetsGoPublic\Agents\Rosetta\NLG for concept grounding, and the agent grounding actions have their own specific requirements (usually you need to add variant prompts in Request.pm). See each action's header file (e.g. DMCore\Grounding\GroundingActions\GARepeatPrompt.h) for details.
