We study life-long learning conversational agents that rely on sub-symbolic representations of sentences, encoded by neural networks according to the theory of semantic-based regularization. Each agent continuously reads textual sources and creates corresponding semantic-based abstract representations, which are used for conversation. The semantics is acquired in the classic form of First-Order Logic (FOL) and translated into corresponding real-valued constraints according to t-norm theory. Our agents also acquire knowledge automatically and interact actively in order to enrich their knowledge representation. Their conversational behavior ranges from babbling to unrestricted conversation, in which the agents can also play an active role. The implemented life-long learning process is driven by developmental principles, which lead to the gradual acquisition of knowledge, thus offering an intriguing perspective on breaking the induction-deduction chicken-and-egg dilemma.
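As a minimal sketch of the FOL-to-constraint translation (the notation and the choice of the Łukasiewicz t-norm are our assumptions; the abstract does not fix a particular t-norm), a universally quantified rule over a sample $\mathcal{X}$ can be mapped to a degree of fulfillment of the network outputs $f_A, f_B \in [0,1]$:
\[
\forall x \,\big(A(x) \Rightarrow B(x)\big)
\;\longmapsto\;
\Phi(f_A, f_B) \;=\; \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} \min\!\big(1,\; 1 - f_A(x) + f_B(x)\big),
\]
where the quantifier is realized as an average and the implication as the Łukasiewicz residuum; the learner is then penalized in proportion to $1 - \Phi$, so that satisfying the logical rule becomes a soft real-valued constraint on training.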