Search results
(21 - 32 of 32)
- Title
- Cortex-inspired developmental learning for vision-based navigation, attention and recognition
- Creator
- Ji, Zhengping
- Date
- 2009
- Collection
- Electronic Theses & Dissertations
- Title
- IPCA : an intelligent control architecture based on the generic task approach to knowledge-based systems
- Creator
- Decker, David Bruce
- Date
- 1995
- Collection
- Electronic Theses & Dissertations
- Title
- Developmental learning with applications to attention, task transfer and user presence detection
- Creator
- Huang, Xiao
- Date
- 2005
- Collection
- Electronic Theses & Dissertations
- Title
- An Analytical and experimental investigation of the elastodynamic response of a class of intelligent machinery
- Creator
- Sunappan, Vasudivan
- Date
- 1987
- Collection
- Electronic Theses & Dissertations
- Title
- Grounded language processing for action understanding and justification
- Creator
- Yang, Shaohua (Graduate of Michigan State University)
- Date
- 2019
- Collection
- Electronic Theses & Dissertations
- Description
-
Recent years have witnessed increasing interest in cognitive robots entering our lives. To reason, collaborate, and communicate with humans in the shared physical world, agents need to understand the meaning of human language, especially actions, and connect that meaning to the physical world. Furthermore, to make communication more transparent and trustworthy, agents should have a human-like ability to justify actions and explain their decision-making. The goal of this dissertation is to develop approaches that learn to understand actions in the perceived world through language communication. Toward this goal, we study three related problems.

Semantic role labeling captures semantic roles (or participants), such as agent, patient, and theme, associated with verbs in text. While it provides important intermediate semantic representations for many traditional NLP tasks, it does not capture grounded semantics with which an artificial agent can reason, learn, and perform actions. We use semantic role labeling to connect visual semantics with linguistic semantics. On one hand, this structured semantic representation extends traditional visual scene understanding beyond simple object recognition and relation detection, which is important for human-robot collaboration tasks. On the other hand, because of shared common ground, not every language instruction is fully specified explicitly. We propose to ground not only explicit semantic roles but also implicit roles that remain hidden during communication. Our empirical results show that by incorporating this semantic information, we achieve better grounding performance and a better semantic representation of the visual world.

Another challenge for an agent is explaining to a human why it recognizes an observed activity as a particular action. With recent advances in deep learning, many models have proven effective at action recognition, but most function as black boxes and offer no interpretation of the decisions they produce. To enable collaboration and communication between humans and agents, we developed a generative conditional variational autoencoder (CVAE) approach that allows the agent to learn to acquire commonsense evidence for action justification. Our empirical results show that, compared to a typical attention-based model, the CVAE has significantly higher explanatory ability in terms of identifying correct commonsense evidence to justify perceived actions. A further experiment on communication grounding shows that the commonsense evidence identified by the CVAE can be communicated to humans to achieve significantly higher common ground between humans and agents.

The third problem combines action grounding with action justification in the context of visual commonsense reasoning. Humans have tremendous visual commonsense knowledge with which to answer a question and justify the rationale; the agent does not. On one hand, this process requires the agent to jointly ground both answers and rationales to images; on the other hand, it requires the agent to learn the relation between the answer and the rationale. We propose a deep factorized model to better capture the relations among the image, question, answer, and rationale. Our empirical results show that the proposed model outperforms strong baselines in overall performance. By explicitly modeling the factors of language grounding and commonsense reasoning, the proposed model provides a better understanding of how these factors affect grounded action justification.
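The abstract above describes grounding both explicit and implicit semantic roles to objects in a visual scene. The following is a minimal, hypothetical sketch of that idea; the frame, detections, and the commonsense prior for implicit roles are illustrative assumptions, not the dissertation's actual model.

```python
# Hypothetical sketch: grounding semantic roles to detected objects.
# All names, structures, and scores below are illustrative assumptions.

# A semantic-role frame for "the woman cuts the bread":
# the instrument role is implicit (unstated in the instruction).
frame = {
    "verb": "cut",
    "agent": "woman",        # explicit role
    "patient": "bread",      # explicit role
    "instrument": None,      # implicit role, hidden in communication
}

# Objects detected in the visual scene, with labels and confidence scores.
detections = [
    {"id": 0, "label": "woman", "score": 0.97},
    {"id": 1, "label": "bread", "score": 0.91},
    {"id": 2, "label": "knife", "score": 0.88},
    {"id": 3, "label": "table", "score": 0.95},
]

# A toy commonsense prior: typical fillers for a verb's implicit roles.
implicit_prior = {("cut", "instrument"): {"knife", "scissors"}}

def ground(frame, detections, implicit_prior):
    """Map each semantic role to a detected object id, filling
    implicit roles from the commonsense prior."""
    grounding = {}
    for role, word in frame.items():
        if role == "verb":
            continue
        if word is not None:
            # Explicit role: match the mentioned word against labels.
            matches = [d for d in detections if d["label"] == word]
        else:
            # Implicit role: search detections against the verb's prior.
            candidates = implicit_prior.get((frame["verb"], role), set())
            matches = [d for d in detections if d["label"] in candidates]
        if matches:
            # Break ties by detector confidence.
            grounding[role] = max(matches, key=lambda d: d["score"])["id"]
    return grounding

print(ground(frame, detections, implicit_prior))
# {'agent': 0, 'patient': 1, 'instrument': 2}
```

Even in this toy form, the payoff the abstract claims is visible: the knife is grounded as the instrument although the instruction never mentions it.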
- Title
- 'Almost' real-time diagnosis and correction of manufacturing scrap using an expert system
- Creator
- Chesney, David Raymond
- Date
- 1987
- Collection
- Electronic Theses & Dissertations
- Title
- Representation and processes of pedagogic knowledge
- Creator
- Grossman, Harold Charles
- Date
- 1978
- Collection
- Electronic Theses & Dissertations
- Title
- Interactive learning of verb semantics towards human-robot communication
- Creator
- She, Lanbo
- Date
- 2017
- Collection
- Electronic Theses & Dissertations
- Description
-
"In recent years, a new generation of cognitive robots has started to enter our lives. Robots such as ASIMO, PR2, and Baxter have been studied and applied in education and service applications. Unlike traditional industrial robots performing specific repetitive tasks in a well-controlled environment, cognitive robots must be able to work with human partners in a dynamic environment filled with uncertainties and exceptions. It is impractical to pre-program every type of knowledge (e.g., perceptual knowledge such as different colors or shapes; action knowledge such as how to complete a task) into robot systems ahead of time. Just as children learn from their parents, it is desirable for robots to continuously acquire knowledge from human partners on how to handle novel and unknown situations. Driven by this motivation, the goal of this dissertation is to develop approaches that allow a robot to acquire and refine knowledge, particularly knowledge related to verbs and actions, through interaction and dialogue with its human partner. Toward this goal, this dissertation makes the following contributions.

As a first step, we propose goal-state-based verb semantics and develop a three-tier action/task knowledge representation. This representation, on one hand, supports the connection between symbolic representations of language and continuous sensorimotor representations of the robot; on the other hand, it supports the application of existing planning algorithms to address novel situations. Our empirical results show that, given this representation, the robot can immediately apply newly learned action knowledge to perform actions in novel situations.

Second, the goal-state representation and the three-tier structure are integrated into a dialogue system on board a SCHUNK robotic arm to learn new actions through human-robot dialogue in a simplified blocks world. For a novel complex action, the human can give an illustration through dialogue using the robot's existing action knowledge. By comparing the environment before and after the illustration, the robot can identify a goal state to represent the novel action, which can then be applied immediately in new environments. Empirical studies show that action knowledge can be acquired by following human instructions; furthermore, step-by-step instructions lead to better learning performance than one-shot instructions.

To address the insufficiency of a single goal-state representation in more complex domains (e.g., kitchen and living room), the single goal state is extended to a hierarchical hypothesis space that captures the different possible outcomes of a verb action. Our empirical results demonstrate that the hypothesis-space representation, combined with the learned hypothesis-selection algorithm, outperforms approaches using a single-hypothesis representation.

Lastly, we address uncertainties in the environment during verb acquisition. Previous work relies on perfect environment sensing and human language understanding, which does not hold in real-world situations. In addition, the rich interactions between teachers and learners observed in human teaching and learning have not been explored. To address these limitations, the last part presents a new interactive learning approach that allows the robot to proactively engage with its human partner by asking good questions to handle environmental uncertainty. Reinforcement learning is applied so that the robot acquires an optimal policy for its question-asking behavior by maximizing long-term reward. Empirical results show that this interactive learning approach leads to more reliable models of grounded verb semantics, especially in noisy environments."--Pages ii-iii.
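The abstract above describes goal-state verb semantics: the robot induces the meaning of a demonstrated verb by comparing the environment before and after the illustration, then reuses that goal state in new environments. A minimal sketch of that induction step follows; the fluent names and blocks-world states are illustrative assumptions, not the dissertation's actual representation.

```python
# Hypothetical sketch of goal-state verb semantics: the verb's meaning is
# the set of fluents that became true during the demonstration.
# Fluent names and states below are illustrative assumptions.

def induce_goal_state(before, after):
    """Goal state = fluents newly true after the demonstrated action."""
    return after - before

def is_achieved(goal, state):
    """The learned verb is satisfied in any state where all goal fluents
    hold, regardless of how the robot reached that state."""
    return goal <= state

# Demonstration of "stack blue on red" in a simplified blocks world.
before = {("on_table", "red"), ("on_table", "blue"),
          ("clear", "red"), ("clear", "blue")}
after = {("on_table", "red"), ("on", "blue", "red"), ("clear", "blue")}

goal = induce_goal_state(before, after)
print(goal)  # {('on', 'blue', 'red')}

# The goal transfers to a new environment reached by a planner,
# even though extra objects are present.
new_state = {("on_table", "red"), ("on", "blue", "red"), ("clear", "blue"),
             ("on_table", "green"), ("clear", "green")}
print(is_achieved(goal, new_state))  # True
```

Representing the verb by its goal state rather than by an action sequence is what lets an off-the-shelf planner achieve the same verb under novel starting conditions.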
- Title
- CMOS VLSI implementations of a new feedback neural network architecture
- Creator
- Wang, Yiwen
- Date
- 1991
- Collection
- Electronic Theses & Dissertations
- Title
- Living real experience in virtual network environments in Pierre Teilhard de Chardin
- Creator
- Santos, Gildasio Mendes dos
- Date
- 2000
- Collection
- Electronic Theses & Dissertations
- Title
- Cortex-inspired goal-directed recurrent networks for developmental visual attention and recognition with complex backgrounds
- Creator
- Luciw, Matthew
- Date
- 2010
- Collection
- Electronic Theses & Dissertations
- Title
- Real time robot control over the internet with force reflection
- Creator
- Elhajj, Imad Hanna
- Date
- 1999
- Collection
- Electronic Theses & Dissertations