Generating Instructions at Different Levels of Abstraction

When generating technical instructions, it is often convenient to describe complex objects in the world at different levels of abstraction. A novice user might need an object explained piece by piece, while for an expert, talking about the complex object (e.g. a wall or railing) directly may be more succinct and efficient. We show how to generate building instructions at different levels of abstraction in Minecraft. To this end, we introduce the use of hierarchical planning, a method from AI planning which can neatly capture the structure of complex objects. A crowdsourcing evaluation shows that the choice of abstraction level matters to users, and that an abstraction strategy which balances low-level and high-level object descriptions compares favorably to ones which don't.


Introduction
Technical instructions in complex environments can often be stated at different levels of abstraction. For instance, a natural language generation (NLG) system for tech support might instruct a human instruction follower (IF) to either plug "the broadband cable into the broadband filter" or "the thin white cable with grey ends into the small white box" (Janarthanam and Lemon, 2010). Depending on how much the IF knows about the domain, the first, high-level instruction may be difficult to understand, or the second, detailed instruction may be imprecise and annoyingly verbose. An effective instruction generation system will thus adapt the level of abstraction to the user.
In this paper, we investigate the generation of instructions at different levels of abstraction in the context of the computer game "Minecraft". Minecraft offers a virtual 3D environment in which the player can mine materials, craft items, and construct complex objects, such as buildings and machines. The user constructs these objects bit by bit from atomic blocks, and it is always possible to generate natural-language instructions which describe the placement of each individual block. However, it can be more effective to generate more high-level instructions. In the bridge-building example shown in Fig. 1, it is probably better to simply say "build a railing on the other side" instead of explaining where to place the seven individual blocks, provided the IF knows what a railing looks like. Minecraft is the best-selling video game of all time (200 million users), which means that there is a large pool of potential experimental subjects for evaluating NLG systems. Minecraft has been used previously as a platform for experimentation in AI and in particular for NLG (Narayan-Chen et al., 2019; Köhn and Koller, 2019).
We present an instruction giving (IG) system which guides the user in constructing complex objects in Minecraft, such as houses and bridges. The system consists of two parts: a hierarchical planning system based on Hierarchical Task Networks (HTN) (Ghallab et al., 2004; Bercher et al., 2019), which computes a structured instruction plan for how to explain the construction; and a chart-based generation system which inputs individual plan steps and generates the actual instruction sentences (Köhn and Koller, 2019).
Planning systems generate plans based on expressive declarative models, making this approach easily applicable to a wide range of domains and giving it the power to deal with large degrees of freedom in instruction generation. In particular, we leverage the hierarchical planning system to obtain three different strategies for describing complex objects, as illustrated in Fig. 1: low-level, always instructing block-by-block (sentences (a) and (d)); high-level, always instructing to build the next complex subobject (sentences (c) and (e)); and a teaching strategy which first explains how to construct a complex subobject, and then uses high-level descriptions for that object in subsequent instructions (sentences (b) and (e)). We realize these strategies by designing the planner's action-cost function. The strategies constitute a first step towards a fully generic IG system which adapts its description of complex objects to the user's knowledge. The planner's cost function could also be used to incorporate additional criteria, e. g. that the generated sentences should be easy to understand in context.
We evaluate our IG system by crowdsourcing, using the open-source MC-Saar-Instruct platform for IG experiments in the Minecraft domain (Köhn et al., 2020). Each user receives instructions for building either a house or a bridge using one of the three abstraction strategies sketched above. The results show significant differences in completion times and user satisfaction across the three strategies. In the bridge scenario, the teaching strategy outperforms the high-level strategy in completion time and the low-level strategy in user satisfaction. In the house scenario, the high-level strategy outperforms the others in completion time for the walls, illustrating the importance of modeling user knowledge when choosing the level of abstraction.
Plan of the paper. After reviewing some related work (Section 2), we give an overview of the architecture of our IG system (Section 3). We briefly introduce HTN planning and how we use it to define hierarchical Minecraft construction planning models (Section 4). We then explain how to get from construction planning models to instruction planning models, and how to generate instruction plans at different levels of abstraction (Section 5). Section 6 describes our evaluation; Section 7 concludes.

Related Work
Generating natural-language instructions grounded in the mechanics of a real or virtual world is a well-established type of NLG task. For instance, the GIVE Challenge (Koller et al., 2010) required NLG systems to guide human users through a maze while referring to locations and objects. We follow the GIVE Challenge in situating the task in a virtual environment and using crowdsourced task-based evaluation, but the Minecraft world is much more complex than the GIVE world, and contains complex objects. Rookhuiszen et al. (2009) describe an IG system for the GIVE Challenge which dynamically adapts the level of detail of navigation instructions. They use a simple heuristic to switch between abstraction levels. Janarthanam and Lemon (2010) generate referring expressions at different levels of abstraction in an electronics repair scenario, and adapt to novice vs. expert users. They assume a finite set of possible descriptions for each object, and do not exploit the internal structure of complex objects.
The use of AI planning for elaborating and organizing the things that need to be said in an NLG system (i. e. discourse planning) dates back to the early days of NLG (Appelt, 1985; Hovy, 1988). Garoufi and Koller (2014) use non-hierarchical planning to compute communicative plans. They use planning operators which encode communicative actions, and allow them to have effects both on the communicative state and on the state of the world; we disentangle these effects in our hierarchical planning model. Earlier work used a hierarchical planner to generate technical instructions at different levels of abstraction, but that system could only utter sentences which were stored with the planning operators as canned text. Its evaluation did not show that users prefer the system over a baseline, illustrating the difficulty of generating instructions at the right abstraction level.
The Minecraft domain has been used extensively for various tasks in AI (Aluru et al., 2015;Parashar et al., 2017), including planning (Roberts et al., 2017) and natural language understanding (Gray et al., 2019). Regarding NLG specifically, Narayan-Chen et al. (2019) trained a neural model to generate building instructions in Minecraft; in the absence of symbolic domain knowledge, their model struggles to generate correct instructions. Köhn and Koller (2019) show how individual instructions can be generated in Minecraft. Their focus is on generating indefinite referring expressions to objects which do not exist yet because the user is supposed to build them. Here we do not address how to generate the individual utterances, but how to determine the semantic content of these utterances.

NLG System Architecture
Our overall IG system consists of two separate modules: an instruction planner and a sentence generator.
The role of the instruction planner is to compute an instruction plan, i. e. a sequence of instruction actions. An instruction action is an abstract semantic representation of a sentence; the sentence generator then translates it into a natural-language utterance, such as "place a block on top of the yellow block", "build a floor from the black block to the yellow block", or "now I will teach you how to build a railing".
The technical focus of this paper is on the instruction planner. Given a state of the Minecraft world and a specification for the complex object the IF is supposed to build (e. g. a bridge or a house), the instruction planner will compute a sequence of instruction actions as explained above. One technical contribution of this paper is that the instruction actions can be at different levels of abstraction; for example, the instruction plan can simply say "build a railing on the other side" in the situation of Fig. 1, or it can explain how to build the railing block by block. To ensure the correctness of the instruction plan, i. e. to ensure that the instructions, followed correctly, actually do result in the intended complex object, the instruction planner performs construction planning as part of its planning process: it internally refines its high-level actions into block-by-block plans and checks that these work.
The sentence generator takes instruction actions as input and produces natural-language utterances. We use the chart generation system of Köhn and Koller (2019) to generate sentences. This system generates sentences while simultaneously generating definite and indefinite referring expressions (REs); definite REs are used to refer to objects which already exist in the Minecraft environment, and indefinites are generated to refer to objects which do not exist yet because the IF is supposed to build them. Thus for instance, the sentence generator might translate the instruction action ins-railing(0,1,4,5,south) to "build a railing from the top of the blue block to the top of the red block" or "build a railing on the other side of the floor", depending on the state of the dialogue and the Minecraft world.
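The chart-based generator of Köhn and Koller (2019) is beyond a short sketch; the following toy template lookup only illustrates the *interface* described above: an instruction action goes in, a natural-language string comes out. The templates and the `realize` helper are invented for illustration and are not part of the actual system.

```python
# Toy stand-in for the sentence generator's interface (templates invented).
TEMPLATES = {
    "ins-block": "place a block {where}",
    "ins-railing": "build a railing {where}",
    "ins-teach-start-railing": "now I will teach you how to build a railing",
}

def realize(action_name, where=""):
    # Fall back to the raw action name if no template is known.
    template = TEMPLATES.get(action_name, action_name)
    return template.format(where=where).strip()

sentence = realize("ins-railing", where="on the other side of the floor")
```

In the real system, the `where` part is a referring expression computed from the dialogue state and the Minecraft world, which is exactly what makes the chart-based approach necessary.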

Hierarchical Construction Planning
As indicated, we distinguish between construction planning, which determines how a complex object can be built block-by-block; and instruction planning, which determines how, and in particular at which level of abstraction, the building of a complex object can be explained to the IF. Instruction planning encompasses construction planning as a sub-problem. We employ hierarchical planning for both tasks, with hierarchies of complex subobjects and the associated building activities. These are not required for construction planning per se (which can always proceed on a block-by-block basis). But they speed up the construction planning process, and they are key to instruction planning as proposed here. Previous work on planning in Minecraft (Roberts et al., 2017) has considered construction planning only, and has not considered hierarchical planning. We now introduce our construction planning models, which we will extend to instruction planning models in Section 5.

Background: HTN Planning
Hierarchical Task Network (HTN) planning (see Bercher et al. (2019) for a recent overview) comes with different levels of abstraction regarding the things that must be done, the tasks. Primitive tasks (also called actions) can directly be executed in the environment. They come with conditions that need to hold to make them applicable, and their application changes the environment. Abstract tasks describe behavior at a higher level of abstraction. They are not directly applicable, but must instead be decomposed into other tasks by using decomposition methods. The new tasks may, again, be abstract or primitive. Methods are similar to derivation rules in a formal grammar, where the left-hand side is the abstract task and the right-hand side is its decomposition into other tasks/actions. Here we use totally-ordered HTN planning, a common subclass where the right-hand side is restricted to be a task sequence. Planners are given the overall tasks to accomplish, e. g. build a bridge. These are decomposed until only actions are left, which must be applicable in the initial state of the system.
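As a concrete (and heavily simplified) illustration of this decomposition process, the sketch below expands a task network until only primitive tasks remain. The task names and methods are invented for the example and do not reproduce the paper's actual model; in particular, a real HTN planner would search over alternative methods rather than always taking the first one.

```python
# Minimal sketch of totally-ordered HTN decomposition: abstract tasks are
# expanded via methods until only primitive actions remain.
METHODS = {
    # abstract task -> list of possible decompositions (each a task sequence)
    "build-bridge": [["build-floor", "build-railing", "build-railing"]],
    "build-railing": [["place-block", "build-row", "place-block"]],
    "build-row": [["place-block", "place-block"]],
}

def is_primitive(task):
    return task not in METHODS

def decompose(task_network):
    """Expand the leftmost abstract task with its first method (depth-first,
    no backtracking -- enough to illustrate the mechanics)."""
    plan = []
    stack = list(reversed(task_network))
    while stack:
        task = stack.pop()
        if is_primitive(task):
            plan.append(task)
        else:
            for subtask in reversed(METHODS[task][0]):
                stack.append(subtask)
    return plan

plan = decompose(["build-bridge"])
```

Here "build-floor" is treated as primitive only because no method is listed for it; in the actual models, every complex object bottoms out in block placements.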
We now give a definition using the basic formalism introduced by Behnke et al. (2018). An HTN planning problem P is a tuple (F, C, A, M, tn_I, s_0):
• F is a set of propositional state features used to describe the environment. A state s is a truth assignment to these features, usually represented by the set of features true in the state.
• C and A are sets of abstract (also: compound) tasks and primitive tasks (also: actions).
• M ⊆ C × (C ∪ A)* is the set of decomposition methods (where * is the Kleene operator).
• tn_I ∈ (C ∪ A)* is the initial task network, and s_0 ∈ 2^F is the initial state of the environment.
Furthermore, P is associated with functions prec, add, and del that map each action to its preconditions, add-effects, and delete-effects. prec(a) is a logical formula over F such that an action a is applicable in the current state s if prec(a) is satisfied in s. When a is applicable in s, the state resulting from its application is defined as (s \ del(a)) ∪ add(a), where add(a), del(a) ⊆ F. The sets of all possible states and actions (implicitly) define a state transition system describing how the environment can change.
Plans in HTN planning are defined through task networks, which are sequences in (C ∪ A)*. If tn = ω_1 c ω_2 with c ∈ C is a task network and m = (c, ω') ∈ M is a method, then tn can be decomposed with m into tn' = ω_1 ω' ω_2. A plan tn_S is a sequence in A* that can be obtained by iteratively decomposing the initial task network, and that is applicable in the initial state. Plan quality is measured in terms of a cost function cost : 2^F × A → R_0^+, where the task of a planner is to minimize the summed-up cost of the resulting plan. Deciding plan existence in the framework we use here is, in general, EXPTIME-complete (Erol et al., 1994; Alford et al., 2015). A range of solvers is available that tackle this complexity through search (Nau et al., 2003; Bercher et al., 2017) or compilation into simpler frameworks (Alford et al., 2009; Behnke et al., 2019a). Some solvers provide guarantees on the solution costs (Behnke et al., 2019b), or enable anytime behavior, continuing search to find better plans.
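The state-transition and decomposition definitions above can be rendered directly in code. This is a hedged, self-contained toy (preconditions are modeled as a set of required features rather than an arbitrary formula, and the task names are invented):

```python
# States are sets of true features; an action is applicable if its
# preconditions hold, and applying it yields (s \ del) ∪ add.
from collections import namedtuple

Action = namedtuple("Action", "name prec add dele")  # "dele" since del is reserved

def applicable(action, state):
    return action.prec <= state          # prec modeled as a conjunction of features

def apply_action(action, state):
    assert applicable(action, state)
    return (state - action.dele) | action.add

def decompose_once(tn, c, omega):
    """Replace the first occurrence of abstract task c in tn by its method
    body omega:  w1 c w2  ->  w1 omega w2."""
    i = tn.index(c)
    return tn[:i] + list(omega) + tn[i + 1:]

put = Action("put-block(1,0,1)", prec=set(), add={"block(1,0,1)"}, dele=set())
s0 = set()
s1 = apply_action(put, s0)
```

Restricting preconditions to feature sets is a simplification of the formalism, where prec(a) may be any formula over F.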

HTN Construction Planning Models for Minecraft
As a compact explanation of our HTN construction planning models, consider the part of Fig. 3 that is highlighted in boldface, which shows a construction model for building a bridge. The propositional state features F take the form block(x,y,z), encoding whether or not there is a block at those coordinates. The construction of any complex object in Minecraft can ultimately be decomposed into put-block(x,y,z) actions, which correspond to the primitive tasks in the HTN model. While all our Minecraft construction models share the same state features and primitive actions, they differ on the complex objects that can be built, O. In the case of a bridge, these are the floor, the railings, and rows of blocks, placed at different positions and in different orientations. Each complex object X has an associated abstract task BUILD-X. For example, the task BUILD-BRIDGE(0,0,0,5,3,NORTH) corresponds to building a bridge of size 5x3 facing north and starting at position 0,0,0. A construction model specifies one or more decomposition methods for each abstract task, corresponding to different ways to build the object. This is illustrated in Fig. 3 by the two decomposition methods for BUILD-RAILING. The HTN planning system, run on such a model, will choose which option is more suitable for the task at hand to minimize the overall action cost. In Section 5.2, we will specify cost functions suited to optimize instructions for an IF. Fig. 2 shows a plan: BUILD-BRIDGE(0,0,0,5,3,NORTH) is decomposed into a floor and two railings. Other decompositions are possible, e. g., constructing the railings in different order or direction. The tasks are further decomposed until a valid sequence of put-block actions is reached.
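A toy version of this construction model can make the BUILD-X / put-block convention concrete. The railing shape below (a base row with a raised block at each end) is a simplification of the paper's actual railing, and the direction flag stands in for the alternative decomposition methods:

```python
# Toy construction model: primitive put-block actions and BUILD-X routines
# for complex objects (shapes simplified relative to the paper's models).

def put_block(state, x, y, z):
    state.add((x, y, z))

def build_row(state, x, y, z, length, reverse=False):
    # Two "decomposition methods": left-to-right or right-to-left.
    # Both produce the same blocks; a planner would pick the cheaper order.
    cols = range(length - 1, -1, -1) if reverse else range(length)
    for i in cols:
        put_block(state, x + i, y, z)

def build_railing(state, x, y, z, length):
    build_row(state, x, y, z, length)           # base row
    put_block(state, x, y + 1, z)               # raised block at each end
    put_block(state, x + length - 1, y + 1, z)

world = set()
build_railing(world, 0, 0, 5, 4)
```

The point of the hierarchy is visible even in this sketch: BUILD-RAILING never places blocks directly except through its subtasks, mirroring how abstract tasks bottom out in put-block actions.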
Minecraft construction models -- in general, and in particular the models we devise here -- are challenging for HTN planning systems due to the large number of objects required to characterize a 3D world. In our experiments, we use recent algorithms based on Monte-Carlo Tree Search, which do not require a grounding pre-process and are thus able to scale to comparatively large Minecraft worlds while optimizing plan cost.

From Construction Planning to Instruction Planning
Having discussed how to compute a construction plan with an HTN planner, we will now explain how to compute instruction plans. The actions in an instruction plan represent communicative actions, in which the IG system sends a sentence to the IF. These actions can describe the intended construction steps at different levels of abstraction. We capture this in the HTN model by defining a task for instructing the IF to build each complex subobject, say a railing. The planner can then choose to either achieve this task with a single primitive instruction action (which might send "build a railing" to the IF), or to further decompose it into smaller tasks, which will instruct the IF to build two blocks and connect them with a row.
One key challenge is that while the instruction actions represent natural-language instructions, they still need to be grounded in activities in the Minecraft world: we must guarantee that, if the IF follows the instructions correctly, then the correct complex object results. To this end, we reason about the construction and its explanation simultaneously. For every instruction action, the plan also contains the corresponding block-by-block construction actions, thus validating the instruction plan.

HTN Instruction Planning Model
We extend the construction model of Section 4.2 to an instruction model by adding the non-boldface parts of Fig. 3. Since we plan for instruction giving, we have to consider the IF and their knowledge. Some complex objects might be known to the IF (e. g. what a row of blocks is), others might not be. For example, if we instruct the IF to build the first railing for the bridge depicted in Fig. 1, the IF most likely does not know the exact shape this railing should have. Thus, we introduce a propositional state feature knows-T (see F in Fig. 3) for each kind of complex object T , representing whether or not the IF knows how to build such objects. We also incorporate information about the block the IF was instructed to place last. This is used by the cost function (see Section 5.2) to take into account that referring expressions are easier to generate, and to understand, for positions adjacent to the last placed block.
The instruction model also has new primitive actions A, whose names start with ins-; executing such an action corresponds to generating a sentence and sending it to the IF. First, for every instance X of a complex subobject, and for every possible block X, there is an action ins-X that represents asking the IF to build X. For complex subobjects, this action has the precondition that the IF knows the corresponding high-level concept. Second, the actions ins-teach-start-X and ins-teach-end-X correspond to utterances like "I will now teach you how to build a railing". These teaching actions have no preconditions and add knows-X to the state.
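The interaction between the ins-X preconditions and the teaching actions' effects can be sketched with a tiny plan interpreter. The dictionary-based action encoding below is invented for illustration (the actual model is an HTN domain description), but the precondition/effect pattern is the one described above:

```python
# Sketch of the instruction-model actions: ins-X requires knows-X, while
# ins-teach-start-X has no precondition and adds knows-X to the state.

def ins_action(obj_type):
    return {"name": f"ins-{obj_type}",
            "prec": {f"knows-{obj_type}"},
            "add": set(), "del": set()}

def ins_teach_start(obj_type):
    return {"name": f"ins-teach-start-{obj_type}",
            "prec": set(),
            "add": {f"knows-{obj_type}"}, "del": set()}

def run(plan, state):
    # Execute a primitive plan, checking applicability at each step.
    for a in plan:
        if not a["prec"] <= state:
            raise ValueError(f"{a['name']} is not applicable")
        state = (state - a["del"]) | a["add"]
    return state

state = run([ins_teach_start("railing"), ins_action("railing")], set())
```

Running ins-railing without a preceding teach action (and without knows-railing in the initial state) fails, which is exactly what forces the planner to either teach first or fall back to block-by-block instructions.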

Figure 3: Illustration of our construction-planning model (boldface) and instruction-planning model. Here, knows-T is the feature where T is X's type, e. g. T = railing for X = railing(x, y, z, length, orientation).

Decomposition methods allow the modeller to encode different possible instruction variants. Here, we explore this modelling power by allowing each INS-BUILD-X task to decompose in three different ways, corresponding to different levels of abstraction in the explanation of a complex object:
• Decomposition L chooses to explain a complex object in terms of its parts. For instance, it decomposes an "instruct to build railing" task into two "instruct to place block" actions and an "instruct to build row" task. Always choosing Decomposition L results in low-level block-by-block instructions. The plan will alternate ins-block actions (instructing the user to "place a block") with put-block actions (ensuring correctness through construction planning).
• Decomposition H generates a single instruction action for the object at the current level of abstraction. For the railing, it uses the primitive instruction action ins-railing ("build a railing"). This action has a precondition knows-railing, i. e. this decomposition can only be chosen if the IF already understands the concept of a railing (either by initial expertise or by previous teaching through Decomposition T, see next). The instruction is followed by the construction-planning task BUILD-RAILING, ensuring correctness.
• Decomposition T defers instruction to the lower level (like L), but also adds the instruction actions ins-teach-start-X and ins-teach-end-X. These generate utterances like "I will teach you how to build a railing" and have the effect that the IF now understands the corresponding high-level concept (e. g. making the precondition knows-railing of ins-railing true) so that, later on, Decomposition H can be used.

Figure 4: Two example instruction plans. Abstract tasks at the bottom layer need to be further decomposed: BUILD-RAILING could be decomposed as in Fig. 2, and INS-BUILD-BLOCK is always decomposed into an ins-block and a put-block action.

Fig. 4 illustrates how the HTN instruction model interleaves instruction and construction actions to produce valid plans at different levels of abstraction. If Decomposition H is used, the resulting plan will contain a single ins-railing instruction action. This will result in sentences such as (c) or (e) in Fig. 1, depending on context. If Decomposition T is used instead, the resulting plan will be much more detailed, with one instruction for every single block that needs to be placed, e. g. (b) in Fig. 1.
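The three decompositions can be rendered as a small hand-written selector. Note the hedge: in the paper, the HTN planner chooses among L, H, and T via the cost function and search, whereas here the choice is passed in explicitly; the task and action names mirror the paper's, but the function itself is invented:

```python
# Toy rendering of the three method variants for INS-BUILD-RAILING.

def ins_build_railing(knows, strategy):
    if strategy == "H":                   # requires knows-railing
        assert "railing" in knows, "H only applicable if the IF knows railings"
        return ["ins-railing"]
    # Decomposition L: explain the railing in terms of its parts
    body = ["ins-block", "INS-BUILD-ROW", "ins-block"]
    if strategy == "T":                   # L plus teaching brackets
        knows.add("railing")              # effect of ins-teach-start-railing
        return ["ins-teach-start-railing"] + body + ["ins-teach-end-railing"]
    return body

knows = set()
first = ins_build_railing(knows, "T")     # teach on the first occurrence
second = ins_build_railing(knows, "H")    # high-level afterwards
```

The second call succeeds only because the first one added knows-railing, which is the mechanism by which Decomposition T enables Decomposition H later in the plan.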

Designing the Cost Function
Given an HTN instruction-planning model as just defined, the HTN planner automatically decides which decomposition methods (in particular: L/H/T) are used at which points of the instruction, in a manner that minimizes plan cost. Instruction quality can thus be optimized by suitably defining the cost function. We demonstrate this by showing how the cost function, in combination with the initial environment state s_0, can be used to realize different instruction-giving strategies. In other words, we show how the HTN planner can choose the appropriate level of abstraction. We have realized the following three strategies:
Low-level: This strategy always explains how to build complex objects block by block. The initial state does not contain any knows-X features, and the cost function assigns a high cost to instruction actions for complex objects (ins-X and ins-teach-X). Thus, minimal-cost plans always use the L decomposition (cf. Fig. 1 a and d).
High-level: This strategy always explains how to build complex objects with a single abstract instruction. All knows-X state features are true in the initial state, i. e., the IF is assumed to know how to build all complex objects. The ins-X actions are assigned low costs relative to that of ins-block actions. Therefore, minimal-cost plans always use the H decomposition (cf. Fig. 1 c and e).
Teaching: This strategy strikes a balance between the first two. Its initial state assumes that all knows-X state features are false; hence, like Low-level, it explains each complex object in simpler terms when it is first built. However, it encourages the use of Decomposition T to do this, so that for later instances of the same object Decomposition H can be used. This is achieved by assigning a low cost to ins-X and ins-teach-X relative to the cost of ins-block (cf. Fig. 1 b and e).
These strategies could, of course, be implemented without the use of a general HTN planning system. However, our generic implementation readily handles deeper user models if available, and it facilitates the flexible combination with other criteria. In particular, the cost of ins-block actions can be used to model what the sentence generator can or cannot express easily. In general, this action cost could be determined by the sentence generator, allowing a deep co-optimization between instruction planning and sentence generation (which is future work, see Section 7). For now, we have realized this kind of combination through a simple model reflecting the fact that referring expressions are easier to generate, and to understand, for positions adjacent to the last placed block (which can, for example, be referred to with a pronoun). To this end, we assume a reduced cost for ins-block if the placed block is adjacent to the previously placed block (as encoded by the lastblock feature).
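The strategy-specific cost functions can be sketched as follows. The concrete numbers are invented; only the relative ordering matters for which decomposition minimizes plan cost, and the "lastblock-adjacent" feature is a simplified stand-in for the adjacency encoding described above:

```python
# Sketch of strategy-specific action costs (cost : state x action -> R+).

def make_cost_fn(strategy):
    def cost(state, action):
        if action.startswith("ins-block"):
            # cheaper next to the last placed block: easier referring expressions
            return 5 if "lastblock-adjacent" in state else 8
        if action.startswith("ins-"):
            # abstract instructions and teaching: prohibitive for low-level,
            # cheap for the high-level and teaching strategies
            return 1000 if strategy == "low-level" else 1
        return 0  # put-block actions are internal and never uttered
    return cost

low = make_cost_fn("low-level")
teach = make_cost_fn("teaching")
```

Under `low`, any plan containing an ins-X action is dominated by a block-by-block plan; under `teach`, the one-time teaching overhead pays for itself as soon as a second instance of the same object appears.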

Evaluation
Data collection. We collected evaluation data by asking human subjects to build complex objects in Minecraft under the instruction of our IG system. Study participants were recruited through Prolific. Each participant played a single game, matched to one of the six conditions (three strategies × two scenarios). Participants were required to be fluent in English and own a Minecraft license. We obtained 20 to 25 plays per condition and paid each participant ∼10 GBP per hour.
In each game, the participant was first informed about the target structure ("Welcome! I will try to instruct you to build a [house / bridge]") and then instructed to build this structure until it either was complete or a ten-minute time limit was up and the player had placed at least five correct blocks. We told the participants explicitly that completing the building was not a prerequisite for getting paid, to reduce the risk of people quitting the study because of bad instructions. Either way, the participant was given a secret code word to enter after the game, to guard against cheating.
Each participant filled out a post-experiment questionnaire (see Appendix) after finishing the game. We only considered games for which we also obtained a questionnaire for the evaluation.
Scenarios. We designed two different scenarios to test the different abstraction strategies. The house consists of four walls, which are each four blocks wide and two blocks high, and four rows of four blocks each as the roof. The house is very minimal and has neither a door nor windows. We hypothesized that the high-level strategy would be effective in this scenario, because walls and rows are commonplace objects which the IF may know without needing to be taught.
In the bridge scenario, subjects were asked to construct the bridge in Fig. 1, which consists of three complex objects: the floor and two railings. The railings were specifically designed to be of a non-obvious shape for participants. Because of this, we hypothesized that the teaching strategy would work best.
Implementation details. We used the MC-Saar-Instruct platform (Köhn et al., 2020) to connect the IG system to the study participants. They played Minecraft on their own computers and connected to our Minecraft server, which then forwarded their actions to the IG system and the instructions back to the participants. Instruction plans were computed offline, to make the different games comparable and to ensure responsiveness of the IG system (computing a plan usually takes < 1 second but sometimes takes up to 7 seconds). The natural-language instructions were computed online, and changed as a function of the state of the world (e. g., referring expressions used different spatial relations depending on the IF's position in the world). Whenever the IF places a block incorrectly, i. e. in a position that is not consistent with the instruction plan, the IG system needs to guide the IF back on track. In principle, re-planning methods (Ghallab et al., 2004) could be used for this purpose. Here we opted for a simpler solution: the IG system asks the IF to remove that block and then returns to the original plan. A similar heuristic applies when the IF removes a block which was already placed correctly.
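The recovery heuristic can be sketched as a small event handler. This is a hypothetical event loop written for illustration; MC-Saar-Instruct's actual API differs, and the plan here is reduced to a sequence of block positions:

```python
# Sketch of the simple recovery heuristic: on an incorrect placement, ask
# the IF to remove the block, then resume the precomputed plan unchanged.

def next_instruction(plan, step, event):
    kind, pos = event
    if kind == "placed" and pos != plan[step]:
        return f"Please remove the block at {pos}.", step   # stay on this step
    if kind == "placed":
        return None, step + 1                               # advance the plan
    if kind == "removed" and pos in plan[:step]:
        return f"Please put the block at {pos} back.", step  # undo the removal
    return None, step

plan = [(0, 0, 0), (1, 0, 0)]
msg, step = next_instruction(plan, 0, ("placed", (5, 5, 5)))
```

The design choice is to never re-plan: the original plan stays fixed, which keeps games in different conditions comparable and avoids re-running the planner online.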
Results. In addition to the questionnaire, we also evaluated the IG systems with respect to objective criteria (percentage of successfully constructed buildings, completion time, and number of incorrectly placed blocks). The mean results for each condition are shown in Table 1, including significance test results (Mann-Whitney U test). Below we also report 95% bootstrapped confidence intervals (CI).
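The bootstrapped confidence intervals reported below can be computed along the following lines. This is a stdlib-only sketch of a percentile bootstrap over the mean; the completion times in the example are invented placeholders, not study data:

```python
# Percentile-bootstrap 95% CI for the mean (stdlib only).
import random

def bootstrap_ci(samples, n_resamples=10_000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(samples, k=len(samples))) / len(samples)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

times = [150, 160, 170, 175, 180, 190, 200, 240]  # invented completion times
lo, hi = bootstrap_ci(times)
```

The significance tests in Table 1 (Mann-Whitney U) are a separate computation, typically done with an off-the-shelf statistics package rather than by hand.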

Discussion
Bridge. The IFs took significantly longer to build the bridge under the high-level strategy (mean 275s (CI 217.0, 343.3)) than under the low-level strategy (177s (CI 154.8, 225.6)), and made more mistakes (36.9 (CI 24.3, 55.2) vs. 18.5 (CI 12.3, 28.1)). This is because IFs did not know how to build the very specific railing and thus needed to experiment for a long time. The teaching strategy, which uses high-level descriptions of complex objects only after first explaining them, is as fast and accurate as the low-level strategy (173s (CI 142.7, 226.5)).
At the same time, IFs subjectively rated the low-level strategy significantly lower on the "overall" (2.3 (CI 1.8, 2.7)) and "clarity" (2.4 (CI 1.8, 3.0)) questions than the teaching strategy (overall: 3.4 (CI 2.8, 3.8); clarity: 3.2 (CI 2.7, 3.6)). This confirms our starting hypothesis that low-level instructions can be perceived as tedious by users. Thus the teaching system strikes a good balance of task efficiency and user satisfaction in the bridge scenario.
Looking at the mean building times for the complex subobjects of the bridge, we found that the high-level strategy is much worse for the first railing (low-level 49s; teaching 44s; high-level 129s), but high-level and teaching actually outperform low-level on the second railing (low-level 38s).
House. In the "house" scenario, the high-level strategy still has a significantly higher mean task completion time than the low-level strategy (244s (CI 195.5, 304.2) vs. 171s (CI 152.6, 203.1)) and a higher number of mistakes (29.5 (CI 19.6, 51.3) vs. 14.5 (CI 10.7, 18.4)). This seems to contradict our original hypothesis that IFs should be able to process high-level instructions even without explanation, because walls and rows are familiar objects. Furthermore, unlike in the "bridge" scenario, the teaching strategy is slower than the low-level strategy (239s (CI 195.4, 308.0)) and is judged neither better overall (low: 2.9 (CI 2.4, 3.5), teach: 2.6 (CI 2.1, 3.1)) nor on clarity (low: 2.8 (CI 2.4, 3.2), teach: 2.4 (CI 2.0, 2.7)). This is puzzling: why should teaching the walls slow the IF down in a way that teaching the railings does not?
To answer this question, we analyzed the building times of the complex subobjects of the house. We find that the mean completion time for the four walls is actually lowest for the high-level strategy (76s, compared to low-level 97s and teaching 102s), confirming after all our hypothesis that high-level instructions are efficient for familiar complex objects. Where the teaching and high-level strategies fall behind is the completion time for the first two of the four rows that make up the roof of the house (low-level 53s, teaching 110s, high-level 155s). A closer inspection of the data suggests that this is because the instruction the sentence generator computed for the ins-row action, while semantically and syntactically correct, is hard to understand ("build a row to the right of length four to the top of the back right corner of the previous wall"). The low-level block-by-block strategy does not have that problem. Thus, the sentence generator for high abstraction levels must be designed and tested with special care.

A The Post Game Questionnaire
• Overall, the system gave me good instructions.
• I had to re-read instructions to understand what I needed to do.
• It was always clear to me what I was supposed to do.
• The system's instructions came too late or too early.
• The system was really verbose and explained things that were already clear to me.
• The system gave me useful feedback about my progress.
• Please add any comments or observations you had (free text)
All but the last question are five-point Likert scale questions (Disagree completely to Agree completely).