Headless CMS: why I am writing my own

Hello! A recent article (I saw it just yesterday) prompted me to write this post. I will not retell the main features of headless / content-first / API-first CMSs here: there is plenty of material on that, and many of you are probably already familiar with the trend. Instead, I want to tell you why and how I am writing my own system, why I could not pick one of the existing ones, what I think about other systems I have run into, and what prospects I see in all of this. The write-up will be long (there is two years' worth of material), but I will try to keep it interesting and useful. If you are curious, welcome under the cut.



The story is really quite long, and I will tell it from the beginning: partly to make the true reasons for building my own engine clearer, and partly because without it, it would be hard to explain later why I am doing things one way and not another.

But first, let me briefly list my personal selection criteria for a modern headless CMS and why I still could not pick a ready-made solution, so that readers do not have to wade through a wall of text without knowing where it is all going.

In short: I wanted everything in one place: both the backend and the frontend (not one or the other), a GraphQL API, database management and much more, including a "Make It Pretty" button. I did not find such a thing. I have not fully built it myself yet either, but on the whole quite a lot has come together, and most importantly, it lets me build real projects.

That said, my approach can hardly be called scientific or well-grounded. The fact is that I very often write things of my own; I simply enjoy programming. Two years ago (and for about eight years before that) I was working with the MODX CMF, for which I had also invented plenty of crutches of my own. About three years ago we started a fairly large project for which, it seemed to me, I could use MODX. As it turned out, I could not... The main reason was that it was a startup with no technical requirements and a pile of ideas that changed and grew every day (several times a day, in fact). Every time a new idea required adding a new entity, adding or changing fields on existing ones, or creating, deleting or changing relationships between entities (with the corresponding changes to the database structure), it ended up taking me hours: besides describing the changes in the schema, I had to alter the database almost by hand, update the API, rewrite the application code and so on, and then update the frontend to match all of it. Eventually I decided to look for something new and more convenient that would somehow simplify all of this. I will clarify once more that at the time I was a PHP backend developer, so do not be surprised (or laugh) that I was only then discovering front-end bundlers, LESS preprocessors, npm and so on. One way or another, our project gradually acquired a frontend on React + LESS, an API on GraphQL, and a server on Express.

But not everything was as rosy as it might seem to many now. Remember, this was more than two years ago. If you have been in the modern JS web for less than two years, I recommend reading the article "N reasons to use the Create React App" (habr). Too lazy to read? In short: with the advent of react-scripts you no longer need to bother configuring webpack and the like. Kind people have already configured webpack so that most React projects are practically guaranteed to build on it, and the end developer can focus on programming the actual product rather than on configuring a pile of dependencies, loaders and so on. But that came later. Before that, I had to configure webpack myself and keep up with updates to the whole heap of things that came along with it. And that is only part of the job, essentially just the frontend. You also need a server. You also need an API. You also need SSR (server-side rendering), which, by the way, react-scripts still does not provide as far as I know. In general, everything back then was much more complicated than it is now, there was far less available, and everyone cobbled things together as best they could. And here is how I cobbled things together at the time...

Just imagine:

  • A webpack configuration written by hand, separately for the frontend and the server.
  • My own SSR implementation, so that async data loading worked properly with server rendering, styles arrived already prepared on first load, pages were indexed properly, and the server returned the right status codes for pages that were not found.
  • No Redux. I just did not like Redux from the start; I preferred the idea of using plain flux (although I had to rewrite it a bit for my own needs).
  • Hand-written GraphQL schemas and resolvers, with no automatic database deployment (the API server was used as a middle layer for a MODX site).
  • No react-apollo / apollo-client, etc. Everything was written by hand: requests go through fetch, and the in-browser stores are built on the custom flux (a rough sketch of such a request is shown right after this list).
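For illustration, here is roughly what such a hand-rolled GraphQL request over fetch looks like, before apollo-client came into the picture. This is only a sketch under assumptions: the /api endpoint URL, the query and the store call are hypothetical, not the project's real code.

// Send a GraphQL query over plain fetch and return the data, or throw on errors.
async function graphqlRequest(query, variables = {}) {
  const response = await fetch('/api', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  });
  const { data, errors } = await response.json();
  if (errors && errors.length) {
    throw new Error(errors.map(error => error.message).join('; '));
  }
  return data;
}

// Usage: fetch a list of users and hand it to a custom flux store.
graphqlRequest('query { users { id username } }').then(data => {
  // usersStore.setAll(data.users); // hypothetical flux store
  console.log(data.users);
});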

As a result: one project is still running on one of the first versions of all this, with 500+ visitors a day and 1,000-1,700 unique visitors a day in season (winter). Uptime is 2 months, and only because I manually rebooted the server after a routine software update; before that reboot the uptime was another 6+ months. But the most interesting part is memory consumption: the JS process currently takes almost 700 MB. Yes, yes, I am laughing here together with you :) Of course, that is a lot. I recently did a little maintenance and improved this figure; before that it was over 1,000 MB per process... Nevertheless, it worked and was quite tolerable. And until Google changed the PageSpeed Insights algorithms in November, the site had a performance score of 97/100. Proof.

An interim conclusion based on this project (the underlying system kept developing further, while this particular project was left behind):

Pros

  1. The project API has become more flexible through the use of GraphQL, and the number of server requests has been significantly reduced.
  2. The project has access to a huge number of components on npm.
  3. Project management has become more transparent through the use of dependencies, git, etc.
  4. Bundled scripts and styles are certainly more pleasant than the pile of separate scripts on old sites, where you do not know what can be removed from that zoo without consequences (and you often find several versions of the same library on one site).
  5. The site has become more interactive: pages work without reloading, and returning to previously viewed pages does not require repeated calls to the server.
  6. Data is edited directly on the page, on the principle of "edit what you see, where you see it", without any separate admin panel.

Cons (mainly for the developer)

  1. Everything is very complicated. Really. It is simply unrealistic to bring a third-party developer onto the project. I could barely figure out myself what works how and where things come from. As for item 3 in the pros, where transparency is mentioned: the transparency is only in the sense that if you break something somewhere, you immediately see what broke (the scripts do not build, etc.), and through commits and diffs you can find where you broke it. And if you managed to add something new and it works, at least you clearly understand that yes, everything went fine. But overall it is still hellish hell.
  2. Difficulties with caching. Later I discovered apollo-client for myself, but before that, as I said, I wrote my own flux-based stores. Thanks to these stores, different components could get the data they needed for rendering, but the cache on the client side grew very large (each set of entities of a given type had its own store). As a result, it was hard to check whether an object had already been requested earlier (that is, whether it was worth making a request to the server to look for it), whether all of its related data was available, and so on.
  3. Difficulties with schemas, the database structure and resolvers (the API functions for fetching and modifying data). As I said, I wrote the schemas by hand, and the resolvers too, and in the resolvers I tried to provide caching, handling of nested queries and other subtleties. At that point I had to dig very deep into the essence of GraphQL and its program code. The upside is that I now understand quite well how GraphQL works, what its pros and cons are, and how best to cook it. The downside is that one person obviously cannot write all the conveniences and goodies that teams like Apollo have written. As a result, when I discovered Apollo, I naturally started using their components with great pleasure (though mainly on the frontend; I will explain why below). A rough sketch of such a hand-written resolver follows this list.
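To make the last point more concrete, here is a minimal sketch of a hand-written resolver with naive per-request caching. This is not that project's actual code; loadUserById() and the context.cache object are hypothetical, purely for illustration.

// Hypothetical data source; in the original setup the API server sat in front of a MODX site.
async function loadUserById(id) {
  // ...fetch the record from the backend here
  return { id, username: 'demo' };
}

const resolvers = {
  Query: {
    user: async (parent, { id }, context) => {
      // context.cache is created once per request (e.g. { users: new Map() });
      // this avoids asking the backend twice for the same object within one query.
      if (!context.cache.users.has(id)) {
        context.cache.users.set(id, await loadUserById(id));
      }
      return context.cache.users.get(id);
    },
  },
};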

This project, built on what are now outdated technologies, is 100% my own, so I can afford to leave it as it is until better times. But there are other projects for which I had to go further and develop the platform, and several times I had to rewrite everything from scratch. Below I will talk in more detail about the individual problems I ran into and the solutions I ended up building and applying.

Schema-first: first the schema, then everything else

A site (a web interface, a thin client, etc.) is in the end about displaying information (and managing it, if permissions and functionality allow). But first of all there is still the database (tables, columns, etc.). Having run into several different approaches to working with the database along the way, I liked the schema-first approach the most. You describe the schema of your entities and data types (by hand or through an interface), deploy the schema, and the described changes are immediately applied to the database (tables and columns are created or deleted, along with the relationships between them). Depending on the implementation, all the resolver functions needed to manage this data are generated as well. The project I liked most in this direction is prisma.io.

With your permission, since I have not seen a single article about Prisma even on Habr, I will give it a bit of attention here, because the project is really very interesting, and without it I would not have the platform that makes me so happy now. That is actually why I called my platform prisma-cms: prisma.io plays a very large role in it.

Prisma.io is actually a SaaS project, but with one big caveat: they publish almost everything they do on GitHub. That is, you can use their servers for a very reasonable fee (and set up your own database and API in a matter of minutes), or you can deploy everything yourself. In the latter case, Prisma should logically be divided into two important separate parts:

  1. Prisma-server, that is, the server where the database itself also runs.
  2. Prisma-client. It is essentially also a server, but in relation to the data source (the prisma-server) it acts as a client.

Now I will try to untangle this somewhat confusing picture. The essence of Prisma is that through a single API endpoint you can work with different data sources. Yes, at this point someone will say that GraphQL was invented for exactly that and Prisma is not needed. In general they would be right, but there is one serious point: GraphQL only defines the principles and the overall mechanics; by itself it does not give you out-of-the-box access to the actual data sources. It says, in effect: "You can create an API and describe which requests users may send, but how you handle those requests is your own problem." Prisma, of course, also uses GraphQL (and a lot of other things besides, including various Apollo products), but on top of that it takes care of working with the database. When you describe a schema and deploy it, the required tables and columns (and the relationships between them) are created in the specified database right away, and all the necessary CRUD functions are generated as well. So with Prisma you get not just a GraphQL server but a fully working API that can talk to the database out of the box. Prisma-server provides the database and the interaction with it, while prisma-client lets you write your own resolvers and send requests to the prisma-server (or somewhere else, even to several prisma-servers at once). It follows that you can deploy only the prisma-client yourself (using the prisma.io SaaS as your prisma-server), or you can also deploy the prisma-server yourself and not depend on Prisma's service at all; then everything is yours.
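To illustrate that split, here is a tiny sketch (assuming the Prisma 1 prisma-binding package and an illustrative users field) of a resolver on the prisma-client side that simply forwards a query to the prisma-server:

const resolvers = {
  Query: {
    users: (parent, args, ctx, info) => {
      // ctx.db is a Prisma binding pointed at the prisma-server endpoint;
      // the query is forwarded there, and the selected fields come from `info`.
      return ctx.db.query.users(args, info);
    },
  },
};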

So I chose Prisma as the basis for my platform. But then I had to tweak it quite a bit to turn it into a full platform.

1. Merging schemas


At the time, Prisma could not merge schemas. The task is as follows:

You have a user model described in one module:

type User {
  id: ID! @unique
  username: String! @unique
  email: String @unique
}

and in another module:

type User {
  id: ID! @unique
  username: String! @unique
  firstname: String
  lastname: String
} 

Within a single project you want these two schemas to be merged automatically, so that the output is:

type User {
  id: ID! @unique
  username: String! @unique
  email: String @unique
  firstname: String
  lastname: String
}

But at the time Prisma could not do this. I managed to implement it using the merge-graphql-schemas library.
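For reference, here is a minimal sketch of how such a merge can be done with merge-graphql-schemas; this is an illustration rather than the platform's actual code.

const { mergeTypes } = require('merge-graphql-schemas');

const userTypeFromModuleA = `
  type User {
    id: ID! @unique
    username: String! @unique
    email: String @unique
  }
`;

const userTypeFromModuleB = `
  type User {
    id: ID! @unique
    username: String! @unique
    firstname: String
    lastname: String
  }
`;

// With `all: true` the fields of both User definitions are combined
// into a single User type in the resulting type definitions.
const mergedTypeDefs = mergeTypes([userTypeFromModuleA, userTypeFromModuleB], { all: true });

console.log(mergedTypeDefs); // contains one User type with all five fields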

2. Working with an arbitrary prisma-server


In Prisma, the configuration is written in a special config file. If you want to change the address of the prisma-server you use, you have to edit that file. A trifle, but unpleasant. I wanted to be able to specify the URL right in the command, for example endpoint=http://endpoint-address yarn deploy (or yarn start). Several days were killed on that... But now one Prisma project can be used with any number of endpoints. By the way, prisma-cms currently works just as easily with a local database as with the SaaS prisma servers.
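As an illustration of the idea on the prisma-client side (a sketch under assumptions, using the Prisma 1 prisma-binding package and a hypothetical generated prisma.graphql file), the endpoint can be picked up from an environment variable instead of a hard-coded config value:

const { Prisma } = require('prisma-binding');

// The endpoint comes from the command line, e.g. `endpoint=http://... yarn start`,
// and falls back to a local prisma-server if not set.
const endpoint = process.env.endpoint || 'http://localhost:4466';

const db = new Prisma({
  typeDefs: 'src/generated/prisma.graphql', // hypothetical path to the generated schema
  endpoint,
});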

3. Modules / plugins


This was sorely lacking. As I said, the main task of Prisma is to provide access to various databases, and they do an excellent job of it: they already support MySQL, PostgreSQL, Amazon RDS and MongoDB, with several more source types on the way. But they do not provide any modular infrastructure. There is no marketplace or anything of the kind yet, only a few typical starter templates, and you cannot pick two or three of them and install them into a single project; you have to choose one. I wanted to be able to install any number of modules into the final project, so that on deploy their schemas and resolvers would be merged and you would end up with a single project with the combined functionality. And although there is no graphical interface yet, there are already more than two dozen working modules and components that can be combined in a final project. Here I will immediately settle on a couple of personal definitions: a module is what is installed on the backend (extending the database and the API), while a component is what is installed on the frontend (adding various interface elements). There is no graphical interface for connecting modules so far, but it is not hard for me to write it out like this (and it does not have to be done often):

  constructor(options = {}) {
    super(options);
    // Each module brings its own schema fragment and resolvers;
    // on deploy they are merged into a single API and database schema.
    this.mergeModules([
      LogModule,
      MailModule,
      UploadModule,
      SocietyModule,
      EthereumModule,
      WebrtcModule,
      UserModule,
      RouterModule,
    ]);
  }

After adding new modules, it is enough to run the deploy again with a single command, and that is it: the new tables and columns appear and the functionality is extended.

5. A frontend that reacts to changes in the backend


This was completely lacking. A small digression follows. The thing is, all the API-first CMSs I have seen say: "We are great at providing the API, and you bolt on whatever frontend you like." In practice, "bolt on whatever you like" means "struggle with it however you like." In exactly the same way, UI frameworks say: "Look what cool buttons we have, take them and build whatever you want, and sort out the backend yourself." That always annoyed me. I simply wanted to find a comprehensive CMS written in JavaScript, using GraphQL and providing both the backend and the frontend. I did not find one. And I really wanted API changes to be picked up on the frontend immediately. Several sub-steps were taken to achieve this:

5.1 Generating API fragments


On the frontend, queries are assembled from fragments generated from the schema file. When the API is rebuilt on the server, a new JS file with API fragments is generated as well. Queries are then written like this:

// queryFragments is the generated JS file with API fragments mentioned above
// (the exact import is project-specific).
const {
  UserNoNestingFragment,
  EthAccountNoNestingFragment,
  NotificationTypeNoNestingFragment,
  BatchPayloadNoNestingFragment,
} = queryFragments;
const userFragment = `
  fragment user on User {
    ...UserNoNesting
    EthAccounts{
      ...EthAccountNoNesting
    }
    NotificationTypes{
      ...NotificationTypeNoNesting
    }
  }
  ${UserNoNestingFragment}
  ${EthAccountNoNestingFragment}
  ${NotificationTypeNoNestingFragment}
`;
const usersConnection = `
  query usersConnection (
    $where: UserWhereInput
    $orderBy: UserOrderByInput
    $skip: Int
    $after: String
    $before: String
    $first: Int
    $last: Int
  ){
    objectsConnection: usersConnection (
      where: $where
      orderBy: $orderBy
      skip: $skip
      after: $after
      before: $before
      first: $first
      last: $last
    ){
      aggregate{
        count
      }
      edges{
        node{
          ...user
        }
      }
    }
  }
  ${userFragment}
`;

5.2 One context for all components


React 16.3 introduced a new context API. I set things up so that child components at any level can access a single shared context without having to list the required context types beforehand: you simply declare static contextType = PrismaCmsContext and get all the goodies through this.context (including the API client, the schema, queries and so on).
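As a rough sketch (the PrismaCmsContext import path, the UsersList component and the context fields shown are illustrative assumptions, not the platform's real API), a deeply nested component can subscribe to the whole context like this:

import React, { Component } from 'react';
import PrismaCmsContext from './PrismaCmsContext'; // hypothetical path; the context is provided by the platform

class UsersList extends Component {
  // One declaration is enough to receive the entire shared context.
  static contextType = PrismaCmsContext;

  render() {
    // The context carries the API client, the schema, prepared queries and so on.
    const { client, queryFragments } = this.context;
    return null; // render something with client / queryFragments here
  }
}

export default UsersList;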

5.3 Dynamic filters


I really wanted this too. GraphQL lets you build complex queries with a nested structure. I wanted the filters to be dynamic as well: generated from the API schema and able to express nested conditions. Here is what came out of it:
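As a purely illustrative example (the field names are assumptions in the spirit of Prisma-generated filter inputs), the kind of nested condition such a dynamic filter builder produces is passed as query variables like this:

const variables = {
  where: {
    OR: [
      { username_contains: 'admin' },
      {
        AND: [
          { email_contains: '@example.com' },
          { EthAccounts_some: { address_contains: '0x' } },
        ],
      },
    ],
  },
};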


5.4 Website Builder


And finally, what I was missing was an external site editor, that is, a site builder. I wanted the server to perform only a minimum of actions, with all the final assembly done on the frontend (including setting up routing, generating data selections, etc.). This is a topic for a separate article, because among other things I also wrote my own crutchy WYSIWYG editor for it on top of pure contentEditable, and there are a lot of subtleties there. If my posting rights are restored and people are interested, I will write a separate article.

Well, finally, a short demo video of the builder in action. Still quite raw, but I like it.


I will finish here. I have not yet written even a fraction of what I would like to, but it has already turned out long. I will be glad to see your comments.

P.S. All the source code, including the source code of the site itself, is here.
