Workshop notes from ISEE'08

1st Table


The discussion at the first table revolved around three main questions.

(1.1) What methodologies (e.g., best practices) can inform the design of intelligent support in an exploratory environment?
(1.2) Where is the right balance between constraints (which ease intelligent support) and freedom (the essence of exploratory environments)?
(1.3) What is the role of exploratory environments in classrooms?

Researchers at the first table included: Mihaela Cocea, Manolis Mavrikis (moderator), Richard Noss, Kyparissia Papanikolau, Niels Pinkwart.


Before getting into these questions, we revisited what we all mean by ELEs, and particularly the characteristics of a microworld. We agreed that to qualify as a microworld, an environment should have the following characteristics:

- an expressive 'language'

Regardless of how deep the branching or how complex the alternative actions available in an 'exploratory' environment, it is not a microworld unless it offers building blocks: a tool set of language primitives that can be put together to solve a given problem in a constructive way.

- there are no predefined solutions, and there may be unexpected outcomes.

- there is a substantial level of student control (e.g. students identify and set their own goals).

The extent to which these characteristics apply to all microworlds is questionable, particularly as the need to make them intelligent sometimes requires sacrificing some of these characteristics (e.g. the level of student control).

We also identified similarities between research in ill-defined domains and mathematics learning in exploratory learning environments. A typical example is the legal-argumentation project in which Bruce and Niels are involved. The complexity and the freedom that students have are similar - albeit qualitatively different - to the actions of students working on activities in microworlds. In both cases it is difficult to define the domain precisely. The challenge therefore is to define students' actions at a meta-level.

We then started discussion on:

(1.1) What methodologies (e.g., best practices) can inform the design of intelligent support in an exploratory environment?

We covered the obvious methods of literature review, investigation and experimentation in order to model the learning of a particular concept or skill. We touched briefly on the issue of domain and task dependency, in the sense that models of learning depend on the domain and the exact task the student is interacting with. Therefore, because we are inventing new tasks and new ways of interacting with systems, we need to investigate how students interact with and learn from them, rather than from just any similar tasks out of context.

In this process expert teachers can help. For example, Niels mentioned using VLab in pilot studies in order to identify the processes of scientific experimentation (e.g. extending the one from van Joolingen et al.). Kyparissia mentioned relying on learning styles and then investigating differences in how students learn, interact, etc.

We also discussed the types of feedback:
- domain specific
- meta-level, on:
  - the process of problem solving or collaboration
  - comparison with the solutions of other students

Apparent throughout the discussions was the issue of task domain. Although feedback can be provided on some particular parts of the domain (e.g. a mistake in a formula), it may be possible to be somewhat more general about problem-solving strategies or collaboration processes. In other words, a recurring theme was that one way to avoid the complexity of providing feedback for a specific domain is to identify what makes a 'little mathematician' similar to what makes a 'little chemist' (i.e. the processes we would like to support).

Richard reminded us that the challenge is not to solve 1000 equations but to reflect on the process of solving problems or on the collaboration process: finding similarities between cases, looking at how one solved a previous problem, and applying that to the current problem. This is not what ITSs provide so far.

The idea of relating students to other students was discussed again, and we agreed that the time is ripe to exploit students being in a shared collaboration space.

This led us naturally to

(1.2) Where is the right balance between constraints (which ease intelligent support) and freedom (the essence of exploratory environments)?

We recontextualised the question: the constraint lies not in the tools that are provided, but in the task that students are set and the types of feedback provided. In other words, the student's interaction with the system determines the constraint.

During the discussion we identified a space of four dimensions that specify the constraints of the situation:
- the learner (their state specifies how much constraint is applied in the situation)
- the affordances of the environment
- the task and learning objective, which determine the trajectory of the lesson
- time

Throughout the discussions above, the timing of feedback came up many times. This brought up the common issue in the field of synchronous vs. asynchronous feedback: whether to intervene, or to provide feedback on demand and allow students to recognise their own cognitive conflict and ask for help (students have an amazing way of making sense of their mistakes), or simply to provide feedback.

On this aspect of help on demand, we all shared the worry about the gaming-the-system issue and how it changes between contexts.

(1.3) What is the role of exploratory environments in classrooms?

The original question was reframed to: "What can we capture from classroom characteristics that works well, and that we can learn from and enhance with educational software?"

This was linked to the 'help abuse' mentioned above. The issue of the difference between help from the teacher in the classroom and help from the system tutor concerned us, since affective and social characteristics determine help-requests.

We also touched on the theme 'Collaboration in and beyond the classroom'. There are novel ways to take into account the social characteristics of the classroom. We briefly discussed peer coaching models where one student gets help from the system in order to help another student.
OLM and other visualisations can help the teacher help the students in a large classroom, and seem particularly suited to exploratory environments.

2nd Table


The second table dealt with the following technical issues:

(2.1) Which techniques can be used in an exploratory environment to (i) understand (i.e. model) the learners' actions, (ii) compare their actions to their peers', and (iii) detect their goals?
(2.2) How can information be better visualised to help learners and/or teachers?
(2.3) Which methods are better for encouraging/supporting collaboration in exploratory environments?

Researchers at the second table included: Dror Ben-Anim, Sergio Gutiérrez-Santos (moderator), Bruce McLaren, Darren Pearce, Dimitra Tsovaltzi.



Dror presented his work, which was developed to address the problem of offering exploratory activities across a set of different domains. It started in Physics, but has since been extended, following a university grant, to other domains such as music.

The main concept in the system is the virtual apparatus, a component that can be programmed independently. It provides an API for getting/setting data, which permits the definition of states. A state is a particular configuration of the variables that a virtual apparatus exposes; the configuration can refer to specific values of the variables or to ranges of values. A teacher can define these states on the virtual apparatus in order to provide feedback, i.e. if the student is in some state, show this kind of feedback.
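The state/feedback mechanism described above can be sketched as follows. This is a minimal illustration only: all names (VirtualApparatus, StateRule, feedback_for) are hypothetical and do not reflect the real system's API.

```python
class VirtualApparatus:
    """A component exposing its variables through a simple get/set API."""

    def __init__(self):
        self._vars = {}

    def set(self, name, value):
        self._vars[name] = value

    def get(self, name):
        return self._vars.get(name)


class StateRule:
    """A teacher-defined state: ranges of variable values mapped to feedback."""

    def __init__(self, ranges, feedback):
        # ranges: dict mapping variable name -> (low, high) inclusive bounds
        self.ranges = ranges
        self.feedback = feedback

    def matches(self, apparatus):
        # The learner is "in" this state if every variable lies in its range.
        return all(lo <= apparatus.get(var) <= hi
                   for var, (lo, hi) in self.ranges.items())


def feedback_for(apparatus, rules):
    """Return the feedback of the first state the learner is in, if any."""
    for rule in rules:
        if rule.matches(apparatus):
            return rule.feedback
    return None
```

For example, a teacher could define a state covering temperatures between 90 and 110 with the feedback "Careful: the mixture is near boiling"; whenever the learner's actions drive the apparatus into that configuration, the message is shown.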

There was some discussion regarding whether the approach could be used for open exploratory environments, or whether it needed some level of restriction in the virtual apparatus in order to be feasible. The discussion involved work done by Darren and others; Darren showed a mathematical microworld with which students create shapes and patterns based on square tiles. If states could be defined in such an open environment, the approach could be used.

Regarding the difficulty of modelling the exploration that takes place in an ELE, there was some discussion trying to identify aspects that could be modelled in any domain. It was agreed that attempts could be made to develop domain-independent models, but it was clear that knowledge about the domain in which a model is defined makes a significant difference (also see 2.extra, below). However, we concluded that the only modelling that could be performed across different domains was at the level of strategies (e.g. performing collaboration according to some script), not knowledge.

Regarding groups for optimal collaboration, they could be formed according to knowledge level; one possible strategy is to mix stronger and weaker learners, so that one can act somewhat like a teacher for the other. (See collaboration (2.3) below.)
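One simple way to realise this grouping strategy is to rank learners by knowledge level and pair the strongest with the weakest. The sketch below is illustrative only (it is not from any of the workshop systems, and the function name and data shape are assumptions).

```python
def pair_mixed(learners):
    """Pair strong with weak learners.

    learners: list of (name, knowledge_level) tuples.
    Returns (pairs, leftover): each pair is (stronger, weaker);
    leftover holds an unpaired learner when the group size is odd.
    """
    # Rank from weakest to strongest by knowledge level.
    ranked = sorted(learners, key=lambda learner: learner[1])
    pairs = []
    while len(ranked) >= 2:
        weakest = ranked.pop(0)    # front of the ranking
        strongest = ranked.pop(-1)  # back of the ranking
        pairs.append((strongest, weakest))
    # Any odd learner out could instead join an existing pair as a triple.
    return pairs, ranked
```

With four learners of levels 1, 5, 3 and 2, this pairs the level-5 learner with the level-1 learner, and the level-3 learner with the level-2 learner.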


With regard to the support that a system can provide to the teacher, there were three ideas that seemed interesting and useful to everybody:


Bruce, Niels and Dimitra presented their work, which adds a collaborative component to VLab. VLab is a virtual Chemistry laboratory in which learners can perform several experiments, mixing chemicals, etc. The final goal is to accurately complete the calculation of the stoichiometric numbers in a balanced chemical reaction. VLab was developed in the first place by a Chemistry professor with programming skills.

In order to create this collaborative component, they are integrating VLab with FreeStyler, which provides an API enabling such integration.

Afterwards, there was a discussion about the relationship between collaboration and learning. One of the main points was the value of self-explanation in showing and strengthening one's own understanding. In this context, self-explanation means explaining things in your own words, either to yourself or to a peer learner. There was some discussion about the connection between self-explanation and task performance, but we did not reach any conclusion.

One important part of self-explanation, especially when it involves explaining to others, is its pragmatic value. When a child explains something without the supervision of a human teacher, we cannot be sure with state-of-the-art technology that her explanation is correct. However, if two kids are put together to explain something to each other, they will make sure that the explanations are correct up to a point: one kid will not let the other go until she has managed to understand the issue, which means that her peer has to do a good job of explaining things (and therefore must do a good job of making things clear in her own head). This means that the teacher does not have to be present at all interactions, and part of the work can be done by the learners themselves (the "marking" or "grading" is not done by the system, but by the peers).


There was another topic of discussion, related to all three of the former: how do you assess what the kids do? In an exploratory environment, this is related to understanding the actions of the student. We acknowledged that this is a process that is difficult even for experts (e.g. teachers), so it might not be possible to do it automatically. No conclusion was reached beyond that.

The discussion moved to the possibility of providing feedback to the students in a domain-independent way. After some discussion, the general agreement was that the results cannot be very useful without domain knowledge. Even real teachers are, after all, domain-dependent.


Thank you very much to all participants in the workshop, both those who came to Maastricht and those who participated in the mailing list. It was a very productive workshop, and the results opened exciting lines of research until the next edition of ISEE. We hope to see you all again in the next Workshop on Intelligent Support for Exploratory Environments!