In this paper, we describe an interface that demonstrates spatial intelligence. This interface, an embodied conversational kiosk, builds on research in embodied conversational agents (ECAs) and on information displays in mixed reality and kiosk format. ECAs leverage people's ability to coordinate information displayed in multiple modalities, particularly information conveyed in speech and gesture. Mixed reality depends on users' interactions with everyday objects that are enhanced with computational overlays. We describe an implementation, MACK (Media lab Autonomous Conversational Kiosk), an ECA who can answer questions about, and give directions to, the MIT Media Lab's various research groups, projects, and people. MACK uses a combination of speech, gesture, and indications on a normal paper map that users place on a table between themselves and MACK. Research issues include users' differential attention to hand gestures, speech, and the map, and how references made in these modalities can be fused both in input understanding and in output generation.
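To make the fusion problem concrete, the sketch below pairs a timestamped deictic expression from speech (e.g., "this group") with a temporally close pointing event on the shared paper map and resolves it against named map regions. This is a minimal illustration of the kind of input fusion at issue, not MACK's actual architecture; all names, types, and the timing threshold (SpeechHypothesis, PointEvent, MAP_REGIONS, FUSION_WINDOW, fuse_reference) are assumptions introduced here for exposition.

```python
# Minimal sketch of multimodal reference fusion; illustrative only,
# not MACK's implementation. All names and thresholds are assumed.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SpeechHypothesis:
    text: str          # recognized utterance, e.g. "what does this group do?"
    deictic: bool      # whether it contains a deictic expression ("this", "that")
    timestamp: float   # seconds, from the speech recognizer


@dataclass
class PointEvent:
    x: float           # map coordinates of the user's pointing gesture
    y: float
    timestamp: float


# Hypothetical map regions: name -> axis-aligned bounding box (x0, y0, x1, y1)
MAP_REGIONS = {
    "Gesture and Narrative Language group": (0.0, 0.0, 0.3, 0.4),
    "Robotic Life group": (0.3, 0.0, 0.6, 0.4),
}

FUSION_WINDOW = 1.5  # assumed maximum speech/gesture offset, in seconds


def region_at(x: float, y: float) -> Optional[str]:
    """Return the name of the map region containing the point, if any."""
    for name, (x0, y0, x1, y1) in MAP_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None


def fuse_reference(speech: SpeechHypothesis,
                   point: Optional[PointEvent]) -> Optional[str]:
    """Resolve a deictic expression by pairing it with a temporally
    close pointing event on the shared paper map."""
    if not speech.deictic or point is None:
        return None
    if abs(speech.timestamp - point.timestamp) > FUSION_WINDOW:
        return None  # gesture too distant in time to co-refer with the utterance
    return region_at(point.x, point.y)


if __name__ == "__main__":
    speech = SpeechHypothesis("what does this group do?", deictic=True,
                              timestamp=10.2)
    point = PointEvent(x=0.15, y=0.2, timestamp=10.5)
    print(fuse_reference(speech, point))
    # -> "Gesture and Narrative Language group"
```

The same temporal-window idea applies symmetrically in generation: an output planner would schedule a pointing indication on the map so that it falls within the window around the spoken deictic expression it accompanies.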