Building Sustainable Architecture Across Domains: Insights from Vladimir Maruna

Vladimir Maruna, Vice President at mLogica, brings over 30 years of experience designing enterprise architecture across sectors like manufacturing, telecom, and finance. In this interview, he shares how his domain-driven, layered approach powers sustainable solutions—from mLogica’s CAP*M big data platform to large-scale metadata governance. With a focus on bridging architecture and development, Vladimir offers practical strategies for building adaptable, long-lasting systems.

1. With your extensive experience across diverse industries, how do you approach designing sustainable architecture that can adapt to different domains like manufacturing, telecommunications, and healthcare?

  1. One of the first steps in designing sustainable architecture is to establish and elaborate the domain-specific knowledge, usually captured in corresponding ontology and taxonomy models. These are not “software” artifacts per se, but they let us operate freely in the domain and provide an excellent basis for the next step, which is a proper separation of concerns leading to the proper layering of the conceptual architecture. We are then able to distinguish, identify and separate “horizontal” capabilities, which are common and most of the time transcend the domain, from “vertical” capabilities, which are domain specific. Capabilities are then grouped into modules and components, where we try to limit and centralize the intrinsic complexity of the process into a minimal number of those modules and components, ideally only a couple of them. At the same time, we try to minimize the necessary interaction between them through strong, contractual interfaces that are technology and implementation independent. At this point we have an established “framework” of “horizontal layers” that is domain agnostic, consisting of a set of functional and non-functional requirements for the horizontal components. Once we have a conceptual architecture, we decide how to implement it within the solution architecture, i.e. which architecture and design patterns we use for the particular solution. Lately that has been a proprietary, modified micro-service architecture pattern combined with an equally modified, proprietary data mesh information layer architecture. Having the conceptual architecture in place gives us the opportunity to decide on and optimize its mapping and realization over the chosen solution architecture.
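To illustrate the separation of “horizontal” from “vertical” capabilities behind contractual interfaces, here is a minimal Python sketch. The capability names (AuditTrail, ClaimScoring) and all class and function names are hypothetical illustrations of the idea, not part of mLogica’s actual framework.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any

# "Horizontal" capability: domain agnostic, defined purely as a contract
# (technology and implementation independent).
class AuditTrail(ABC):
    @abstractmethod
    def record(self, actor: str, action: str, payload: dict[str, Any]) -> None: ...

# "Vertical" capability: domain specific, but expressed in the same
# contractual style so modules interact only through interfaces.
class ClaimScoring(ABC):
    @abstractmethod
    def score(self, claim: dict[str, Any]) -> float: ...

# One concrete horizontal module; the conceptual architecture does not care
# whether this ends up as a micro-service, a library or something else.
@dataclass
class InMemoryAuditTrail(AuditTrail):
    entries: list[tuple[str, str, dict[str, Any]]] = field(default_factory=list)

    def record(self, actor: str, action: str, payload: dict[str, Any]) -> None:
        self.entries.append((actor, action, payload))

# A vertical module depends only on the horizontal contract, never on its implementation.
class NaiveClaimScoring(ClaimScoring):
    def __init__(self, audit: AuditTrail) -> None:
        self.audit = audit

    def score(self, claim: dict[str, Any]) -> float:
        value = min(1.0, claim.get("amount", 0) / 10_000)
        self.audit.record("scoring", "score", {"claim_id": claim.get("id")})
        return value

if __name__ == "__main__":
    scoring = NaiveClaimScoring(InMemoryAuditTrail())
    print(scoring.score({"id": "C-1", "amount": 2_500}))  # 0.25
```

Because the vertical module holds only a reference to the contract, the horizontal implementation can be swapped without touching domain code, which is the point of keeping interfaces technology and implementation independent.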

2. Can you share some of the key challenges and solutions you encountered while integrating enterprise architecture governance and SDLC processes at mLogica?

  1. The main challenge is how to bridge the gap between EA and actual development, which operate at fundamentally different levels of abstraction and detail, and how to streamline the designed architecture into the detailed design and, later, the actual software artifacts. We automate information design at the conceptual and physical level using a modeling tool, and we specify in detail the requirements for interactive tooling and for advanced processing components.
  2. An additional challenge is keeping the architecture in sync with the development: for information design we insist on, and enforce, a change of the design before the change in the implementation, while for utilization components the architecture is updated post-mortem.
  3. Another challenge is confirming that the developed solution materializes the architecture design, which is why we established a separate QA team just to check whether the implementation corresponds to the design and architecture, what the deviations are, and what we should do to bring the two back in sync. Here we are considering involving an AI component once we figure out how to formally specify the requirements and architecture to the level that is needed as input to it.
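Such a design-versus-implementation check can, in principle, be partly automated once both sides are available as metadata. A minimal sketch, assuming the designed model and the implemented schema can each be exported as simple table/column/type dictionaries; the exports themselves (from the modeling tool and from the database catalog) are not shown, and the function name is hypothetical.

```python
from typing import Dict

Schema = Dict[str, Dict[str, str]]  # {table: {column: declared type}}

def diff_design_vs_implementation(design: Schema, implemented: Schema) -> list[str]:
    """Report deviations between the designed schema and the implemented one."""
    deviations: list[str] = []
    for table, columns in design.items():
        if table not in implemented:
            deviations.append(f"missing table: {table}")
            continue
        for column, declared in columns.items():
            actual = implemented[table].get(column)
            if actual is None:
                deviations.append(f"missing column: {table}.{column}")
            elif actual != declared:
                deviations.append(f"type drift: {table}.{column} {declared} -> {actual}")
    for table in implemented:
        if table not in design:
            deviations.append(f"undocumented table: {table}")
    return deviations

if __name__ == "__main__":
    design = {"party": {"party_id": "bigint", "name": "varchar(200)"}}
    implemented = {"party": {"party_id": "bigint", "name": "varchar(100)"},
                   "tmp_party": {"party_id": "bigint"}}
    for deviation in diff_design_vs_implementation(design, implemented):
        print(deviation)
```

A report like this only flags structural deviations; deciding whether to fix the design or the implementation remains the QA team’s call, as described above.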

3. How does the CAP*M big data platform differentiate itself in handling complex data environments, and what role does architecture play in its success?

  1. The main difference is in the method and approach applied to the conceptual architecture, as explained above, which results in a specific layered approach where we distinguish the following “vertical” layers (a code sketch of the resulting pipeline follows at the end of this answer):
    1. Capture – interfaces with the transactional/event source and accommodates its capabilities, data and information scope, content, communication protocols and similar, with the focus on collecting and acquiring the data as close as possible to how it exists in the source, received either as a stream or as a batch,
    2. Process – extracts, transforms and unifies the captured raw data into the domain-specific common format, partially and minimally validated and enriched with the business context,
    3. Ingest and store – accommodates, stores, validates and manages transactional/event data as well as business context data, from the reception stage schema, through the Limited Business Context (LBC), to the various and multiple delivery and utilization schemas,
    4. Enrich – collects and receives contextual, non-transactional information from various external sources and uses it to place the transaction/event into the corresponding business context across a vast number of dimensions for later utilization and analysis,
    5. Delivery – “moves” data between the layers, which includes extraction from the source, transformation to the destination format and loading into the destination schema. This applies equally to the standard unidirectional “elephant trail” from stage to LBC to Utilization and to less standard paths, such as the minimal contextualization and validation data set for the Processing layer, or integrating the results of advanced processing on the Utilization layer back into the LBC, all based on precise, complete and correct metadata. This layer is also responsible for the augmentation, generation or simple re-interpretation of data sets as required.
    6. Utilization – information extracts established and delivered for a particular purpose, used either for advanced processing, including ML/AI, or to provide answers to domain-specific business questions/inquiries over a GUI, in the form of interactive visual tools, or through the corresponding Business Question Service layer (BQS), where the number and structure of those data extract schemas is not limited.
  2. Our approach to information design and management is also particular, since we distinguish, as previously stated, transactional, contextual and master data sets, which are designed and treated differently: the LBC schema for contextualized transactional/event data; an unlimited number of logically linked Subject Areas for contextual data with a full history of change, which are highly adjustable and extendable; and a set of immutable, append-only, shared catalogues for master data.
  3. Our approach is driven and constrained by the utilization needs and requirements rather than by general analytics, as in the case of a DWH or Data Lake(house). We collect and manage only what is needed to answer the required business questions/needs/inquiries. When that scope needs to be extended, we first check whether one of the existing utilization sandboxes can provide the answers, then whether the LBC contains the information needed for the answer, then whether we can find or extend an external source to add what is required, or whether we need to add or extend a transactional/event source.
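The layered pipeline referenced above can be sketched minimally in Python. The stage names mirror the list, but the record structure, field names and the in-memory hand-off between stages are illustrative assumptions; in CAP*M the Ingest and store layer (LBC and delivery schemas) sits between these steps and everything is metadata driven.

```python
from dataclasses import dataclass, field
from typing import Any, Iterable

@dataclass
class Record:
    raw: dict[str, Any]                                      # as captured from the source
    common: dict[str, Any] = field(default_factory=dict)     # domain-specific common format
    context: dict[str, Any] = field(default_factory=dict)    # business context (LBC-bound)

def capture(source: Iterable[dict[str, Any]]) -> Iterable[Record]:
    # Capture: acquire data as close as possible to how it exists in the source.
    for event in source:
        yield Record(raw=dict(event))

def process(records: Iterable[Record]) -> Iterable[Record]:
    # Process: unify raw data into a common format with minimal validation.
    for r in records:
        r.common = {k.lower(): v for k, v in r.raw.items() if v is not None}
        yield r

def enrich(records: Iterable[Record], reference: dict[str, dict[str, Any]]) -> Iterable[Record]:
    # Enrich: attach contextual, non-transactional information from an external source.
    for r in records:
        r.context = reference.get(str(r.common.get("customer_id")), {})
        yield r

def deliver_to_utilization(records: Iterable[Record]) -> list[dict[str, Any]]:
    # Delivery + Utilization: move contextualized data into a purpose-built extract.
    return [{**r.common, **r.context} for r in records]

if __name__ == "__main__":
    source = [{"Customer_ID": "42", "Amount": 99.5, "Note": None}]
    reference = {"42": {"segment": "retail", "region": "EMEA"}}
    extract = deliver_to_utilization(enrich(process(capture(source)), reference))
    print(extract)  # [{'customer_id': '42', 'amount': 99.5, 'segment': 'retail', 'region': 'EMEA'}]
```

The value of the separation is that each stage can be scaled, replaced or re-targeted (stream versus batch, different sources, different utilization extracts) without the other stages needing to change.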

4. Metadata governance seems to be a critical part of your work—how do you ensure effective metadata management in large organizations like banks and credit bureaus?

  1. I would rather say that metadata management, as well as master data management, is of special importance to us, because we see it as one of the essential elements leading to excellence in each and every dimension of solution quality.
  2. For metadata, which is mostly the specification of the architecture, the schema designs and elements of the delivery processes, we use a modeling tool at the conceptual and physical level, with all the details specified for implementation, applying either forward or reverse engineering against a shared metadata repository capable of incremental and smart check-in/check-out.
  3. For master data we recommend different approaches. For simple master data we use centralized shared catalogues, usually for classifications, life cycles, identifier types, various additional properties, types of logical relationships and similar. For more complex master data, which is administered by our customer, we recommend an internal, centralized Register to collect, unify, validate, integrate and share/serve all master data of the supported entities, such as party (legal or natural), location and similar, organized as Subject Areas as explained above. (A small code sketch of such an append-only catalogue follows at the end of this answer.)
  4. In large organizations, the main and biggest obstacles are recognizing the need and making the decision to be driven by enterprise architecture. Once this is decided, the main challenge is who is going to be responsible for which meta/master data and how to enforce and ensure its usage and governance. That is why we usually recommend establishing a central enterprise architecture department responsible for enterprise architecture, and within it a data management group responsible for information architecture, including design, data quality enforcement, schema designs and similar, as well as the implementation of central master data management, including the shared catalogues and central Register explained above. This requires a very careful and subtle phased approach, with well-defined, transparent and realistic expectations, suited and accommodated to the organization’s capability and willingness, to avoid disappointments and failure, since failure will usually prevent a second try for quite some time.
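As a minimal sketch of the “immutable, append-only shared catalogue” idea, the class below only ever adds entries and never updates or deletes them. The entry fields and names are hypothetical; real catalogues for classifications, life cycles or identifier types would carry richer attributes and persistence.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)          # an entry is immutable once created
class CatalogueEntry:
    code: str
    description: str
    valid_from: datetime

class SharedCatalogue:
    """Append-only shared catalogue for simple master data (e.g. classifications)."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._entries: dict[str, CatalogueEntry] = {}

    def add(self, code: str, description: str) -> CatalogueEntry:
        if code in self._entries:
            # existing codes are never overwritten; a new code must be issued instead
            raise ValueError(f"{self.name}: code {code!r} already exists")
        entry = CatalogueEntry(code, description, datetime.now(timezone.utc))
        self._entries[code] = entry
        return entry

    def get(self, code: str) -> CatalogueEntry:
        return self._entries[code]

if __name__ == "__main__":
    party_types = SharedCatalogue("party_type")
    party_types.add("LP", "Legal person")
    party_types.add("NP", "Natural person")
    print(party_types.get("LP").description)  # Legal person
```

Consumers across the organization can safely cache such catalogue entries, because the append-only discipline guarantees that an existing code never changes meaning.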

5. How has your career evolved from CTO to Chief Architect, and what leadership lessons have you learned that you apply to managing technology and innovation today?

  1. My role as CTO also included Enterprise Architect responsibility for a long time, almost from the beginning of my professional employment, since I had this need and impulse to understand the language of the problem before devising and designing the solution that would then be implemented. In addition, understanding wasn’t enough, because I also felt very strongly about specifying the grasped knowledge and understanding formally, in models, and thus sharing it with the implementation team. At the same time, that enabled the accumulation of knowledge, experience and lessons learned, all then used as my own expert base for the further improvement of my professional skills and practice. Consequently, the lesson would be that knowledge accumulation and sharing, an architecture-driven approach, and formal design and expression may require more time and effort, but they are always beneficial for oneself, for the organization, for the customer and for the profession.