With the rise of wikis, the collaboration of countless users, and the spread of automated agents scraping the web in search of actionable information, it has become necessary to create a framework that can serve as a translator between these two parties. Currently, moderators or developers fill this role by manually adding the required structure to natural-language input, assisted by one or more supporting applications. In short, the process still requires human intervention, and that is what the framework proposed in this paper aims to eliminate. The framework consists of two parts. The first is a natural language processing module that uses state-of-the-art Part-of-Speech (PoS) taggers together with our own algorithms to convert natural-language text into triples built from predefined predicates, which are served to machine users (autonomous agents). The generated triples are accompanied by their own schema, enabling machine-based reasoning over the text. The second part, summarization, takes the triples produced by the first and generates natural-language text from them, so that human users of the system can access the information without any knowledge of the underlying triples and schema. The framework is trained on a large corpus of English text to optimally identify the subject and object of a given sentence, along with the most probable predicate. The predicates are stored separately in an XML-based syntax, letting users add or update the schema and predicates as the need arises and enabling easy adaptation of the framework to a specific domain.
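The XML-based predicate store described above might look like the fragment below. The paper does not fix a concrete syntax, so the element and attribute names here are illustrative assumptions; the point is only that predicates and their schema constraints live in a separate, user-editable file.

```xml
<!-- Hypothetical sketch: element and attribute names are assumptions,
     since the abstract does not specify the XML syntax. -->
<predicates>
  <predicate name="isCapitalOf">
    <domain>City</domain>
    <range>Country</range>
  </predicate>
  <predicate name="locatedIn">
    <domain>Place</domain>
    <range>Place</range>
  </predicate>
</predicates>
```

Keeping predicates in such a file is what allows users to add or update the schema without touching the extraction code, which is how the framework adapts to a specific domain.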
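To make the extraction step concrete, the sketch below shows how a (subject, predicate, object) triple might be pulled from a PoS-tagged sentence. This is only an illustrative toy rule, not the paper's trained algorithm: it assumes Penn Treebank style tags and takes the first noun before the main verb as subject, the first verb as predicate, and the first noun after the verb as object.

```python
# Illustrative sketch only: a naive rule-based stand-in for the paper's
# trained triple-extraction algorithms. Input is a PoS-tagged sentence
# (Penn Treebank tags, e.g. NN* for nouns, VB* for verbs).

def extract_triple(tagged_tokens):
    """Return a (subject, predicate, object) triple, or None if not found.

    tagged_tokens: list of (word, pos_tag) pairs, as produced by a PoS tagger.
    Toy rule: subject = first noun before the first verb,
    predicate = first verb, object = first noun after the verb.
    """
    subject = predicate = obj = None
    for word, tag in tagged_tokens:
        if tag.startswith("VB") and predicate is None:
            predicate = word
        elif tag.startswith("NN"):
            if predicate is None and subject is None:
                subject = word
            elif predicate is not None and obj is None:
                obj = word
    if subject and predicate and obj:
        return (subject, predicate, obj)
    return None


tagged = [("Paris", "NNP"), ("is", "VBZ"), ("the", "DT"),
          ("capital", "NN"), ("of", "IN"), ("France", "NNP")]
print(extract_triple(tagged))  # -> ('Paris', 'is', 'capital')
```

In the full framework, the bare verb would instead be mapped onto one of the predefined predicates from the XML store, and the triple would be emitted together with its schema.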