Don’t we all just want to use SQL on the front end? (vjpr.medium.com)
148 points by vaughan on April 15, 2021 | 179 comments



Because it exposes a lot of inner details and potential security/privacy risks if clients are able to change parameters.

But... if you're willing to expose your data model to the client, then it seems like some combination of code-signing the SQL (with parameters left empty) along with an acceptable list of named parameters could work, e.g.:

Signed blob:

    {
      sql: "SELECT name, email, ... FROM users WHERE users.id = %(userid)s"
      allowedParameters: []
    }
    
Such that the client is allowed to send this query, but not to provide any parameters; the backend has to supply the userid. That seems workable to me.

Likewise:

    {
      sql: "SELECT order.price, order.... FROM orders WHERE orders.user_id = %(userid)s AND orders.id = %(orderid)s"
      allowedParameters: ["orderid"]
    }
    
This would allow the frontend to provide an order ID, but not the user ID - so the user still can't look at other people's orders, and if they're not logged in, the query is invalid without a userid. And the client isn't allowed to change the "sql" or the parameters sent, or the signature becomes invalid.
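A minimal sketch of what the verification side might look like, assuming an HMAC-based signature and a Node backend (the function names, the param-merging rules, and the env var are all illustrative, not part of the idea itself):

    import { createHmac, timingSafeEqual } from "node:crypto";

    const SIGNING_KEY = process.env.QUERY_SIGNING_KEY!; // server-only secret

    interface SignedQuery {
      sql: string;
      allowedParameters: string[];
      signature: string; // attached when the server/build step emits the blob
    }

    function sign(sql: string, allowedParameters: string[]): string {
      return createHmac("sha256", SIGNING_KEY)
        .update(JSON.stringify({ sql, allowedParameters }))
        .digest("hex");
    }

    function checkSignedQuery(
      q: SignedQuery,
      clientParams: Record<string, unknown>,
      session: { userId: string }
    ): { sql: string; params: Record<string, unknown> } {
      const expected = sign(q.sql, q.allowedParameters);
      if (
        expected.length !== q.signature.length ||
        !timingSafeEqual(Buffer.from(expected), Buffer.from(q.signature))
      ) {
        throw new Error("invalid query signature"); // sql or param list was tampered with
      }
      // Server-held values always win; the client may only fill in whitelisted names.
      const params: Record<string, unknown> = { userid: session.userId };
      for (const name of q.allowedParameters) {
        params[name] = clientParams[name];
      }
      return { sql: q.sql, params }; // hand off to your driver's named-param binding
    }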

Edit: Someone else replied that if you have sufficiently advanced build tooling, you can statically extract the SQL/GraphQL from the client side code and replace them with IDs, and then move the SQL to the backend. That's very cool, but the average startup isn't using bazel/buck/etc. Those tools aren't yet small app friendly enough (which is unfortunate, IMO.)


> But... it seems like if you're willing to expose your data model to the client, then it seems like some combination of code signing the SQL (with parameters left empty) along with the acceptable list of named parameters

Or...you could just expose a schema with views and sprocs where their definitions use only the SQL you are comfortable with against the base tables, which live off in their own schema isolated from external users. Same effect, but doesn’t require a whole new custom layer on top to reimplement what has been a basic feature of RDBMS engines for decades.
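For anyone who hasn't seen the pattern, a rough sketch as a migration (Postgres-flavored; the schema, view, role, and setting names are all made up):

    // Sketch of the "exposed schema" pattern: base tables live in a private
    // schema, and external users only get hand-picked views over them.
    import { Client } from "pg";

    export async function up(db: Client): Promise<void> {
      await db.query(`
        REVOKE ALL ON SCHEMA internal FROM web_user;

        CREATE SCHEMA api;
        CREATE VIEW api.orders AS
          SELECT id, price, created_at   -- only the columns you want exposed
          FROM internal.orders
          WHERE user_id = current_setting('app.user_id')::bigint;

        GRANT USAGE ON SCHEMA api TO web_user;
        GRANT SELECT ON api.orders TO web_user;
      `);
    }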


This is a really interesting interface design, but it seems like for any non-trivial database it would be too easy to make mistakes and give the client too much access. It also seems like it'd be really easy to end up with performance issues. Also, there's basically zero information hiding. The database is your interface. Changing the schema in any way whatsoever is a breaking change to the API! You haven't really saved yourself from the hassle of designing and maintaining backward compatibility for the API. You've just turned your database schema into the API :'(

But I haven't actually seen a codebase that does this, so maybe I'm wrong. Have you used this approach in a project with more than a few developers and a large database? How did it go?


> You haven't really saved yourself from the hassle of designing and maintaining backward compatibility for the API.

Another way to look at it is that changing the database schema is already a problem for the API which has to talk to it, so you're not really saving yourself any work by putting an extra layer between the frontend and the database.

In fact, if anything, you're doubling the amount of work, because there are now two interfaces (DB to API, and API to frontend) whose backwards compatibility you have to worry about.


Yes. The decision calculus here really boils down to how many clients the API has and how much you care about those clients.

If you have 100 other teams each with 5+ developers consuming your API -- or especially if your API is a product -- then keeping the API to frontend interface relatively stable with rare breaking changes is much more valuable than saving yourself even a major time commitment on DB to API work.

On the other hand, if you have relatively few clients -- or if the clients are "lower status"/"cheaper" teams -- saving time on the DB to API work at the expense of client teams might make sense.


> You've just turned your database schema into the API

It might actually be quite easy to version an SQL API, because you have the DDL migration files and could actually make things backwards compatible to some degree. You would have a translation layer to modify the queries. Of course, your migrations would need to be more detailed, tracking how things map to each other. But it would probably get too complex in the end. Not that much different from changes to a REST API, really.


This sounds like a good way to spend 2 years happy and then in the 3rd year spend a monumental amount of money hiring a world-class compilers/databases expert to 10x the perf of DB migrations :)

But, yeah, this happens with REST APIs too.


I’m basically doing this for an app - each endpoint is just a parametrized query and RLS in the database enforces security/auth constraints.

The middleware doesn’t do anything except populate query parameters and do a bit of sanity checking.
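For the curious, the shape of that setup in Postgres is roughly this (a sketch with made-up table and setting names, not the actual app's code):

    // "Thin middleware + RLS does the auth", assuming Postgres/node-postgres.
    //
    // One-time setup in a migration:
    //   ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
    //   CREATE POLICY orders_owner ON orders
    //     USING (user_id = current_setting('app.user_id')::bigint);
    import { Pool } from "pg";

    const pool = new Pool();

    async function runForUser(userId: number, sql: string, params: unknown[]) {
      const client = await pool.connect();
      try {
        await client.query("BEGIN");
        // set_config(..., true) scopes the setting to this transaction, so
        // pooled connections don't leak identities between requests.
        await client.query("SELECT set_config('app.user_id', $1, true)", [
          String(userId),
        ]);
        const { rows } = await client.query(sql, params);
        await client.query("COMMIT");
        return rows;
      } catch (err) {
        await client.query("ROLLBACK");
        throw err;
      } finally {
        client.release();
      }
    }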


Not quite sure I get what's signing the request. Presumably if it's the client, it can be tampered with, so we're guessing that there's still an API layer signing things?

That said, I guess you could just add row / column ACLs to the database user, and then map the database user to the app user.


> Not quite sure I get what's signing the request.

The server is the one that sends the query to the client in the first place, so it can sign the query with its own key and ship both the query and the signature.

If the server ever receives a query without a signature, or with an incorrect one, it can ignore it or return an error.


> Someone else replied that if you have sufficiently advanced build tooling, you can statically extract the SQL/GraphQL from the client side code and replace them with IDs, and then move the SQL to the backend.

We’ve taken this approach for implementing database queries in Lowdefy [0] (SQL support will be live in the next version, implemented using knex [1]). A “shared state” between backend and frontend also makes passing parameters to queries on the backend really seamless, and in turn a great dev experience. It also allows some parameters, like secrets, to be resolved only on the backend.

What we are experiencing with Lowdefy apps is that bringing the data model closer to the UI makes coding and UI maintainability of simple projects very easy. This approach works exceptionally well for us in the BI reporting space, where you primarily aggregate data.

However, as soon as you move out of the CRUD space to more complex backend logic, extracting the logic behind a single interface again simplifies the implementation a lot; without that, apps very quickly become hard to maintain in my experience.

This does not mean that there is no place for easy CRUD dev tools like OP's, but I do believe that an API-like solution is required when life gets more complex or even just transactional. But there are also more creative ways to solve this problem.

[0] - https://github.com/lowdefy/lowdefy [1] - https://github.com/knex/knex


Another issue is that part of what makes a company valuable is its biz logic. Moving all of that to the frontend means you can no longer guard it. Even if you can extract it to the backend, how many REST/GQL endpoints use a single query? And where do the parameters get supplied if not from a route-specific handler?


> you can statically extract the SQL/GraphQL

Yeh, that's the best idea. A babel plugin could do it easily. I already have a watcher script calling `graphql-codegen` to add types to my graphql queries, which works fine in the workflow.


Or you could use Row Level Security, and/or stored functions/procedures for access controls. You can put these constraints in the DB, and just give each of your users a DB role.


I don't think you would use SQL directly. You do want something like GraphQL or FirebaseDB where the language is designed around untrusted users.


I've spent the last six months working with a codebase that does exactly this. Aside from the obvious problems with exposing your schema to potential attackers, opening up potential DOS vectors, and training front-end engineers on yet another technology, you end up with some code that is very, very difficult to test. If you're using it in your hobby project and you're aware of the pitfalls - fine, go ahead and do whatever you like. But if you're expecting to incorporate this as part of an engineering organization: Save everyone the hassle now and don't.


These are the types of APIs where I CONSTANTLY have to tell our internal guys "Hey, don't do that!"

I get WHY you'd want to be able to throw in any random filter/sort to get the exact datapoints you are after, but that's both really hard to scale and often ends up being highly coupled to the underlying datastore.

I get it, requirements gathering and making a clean API is hard and time-consuming. What's harder is not doing that and eventually needing to figure out "How on earth can I get consistent response times, or stop someone from sending an app-crashing query, all while supporting current usages?"


Yeah; reading this the only 'gain' I saw was "we don't have to define an API". But instead you get to try and ensure that raw SQL from the client is always safe.

The former isn't THAT hard to do. The latter is extraordinarily hard.

Get the former wrong and you fix it before it ever goes live. Get the latter wrong and you fix it only after you're massively pwned.

This is just a bad tradeoff.


Agreed. The issue is it's "easy" to do if you have a massive data model, and it can feel pretty fun making a query-able API. What's more fun: making your own DSL, or writing a billion "/foo/{id}" endpoints? Usually the DSL is a lot more fun and rewarding; however, "/foo/{id}" is a FAR more stable and easy-to-maintain API.

That's where I've often run into the problem. When someone wants to make a monolith into a more microservice thing the hard part of carving out monolith usages takes meetings with consumers. Devs don't want a bunch of meetings and requirements gathering and a lot (at least in my org... :( ) would rather just write something cool and throw their hands up and say "We don't know how you'd use this, so we made you the boss".


A lot of internal enterprise tools explicitly have "filter everything" as a feature, so in those cases you might as well use a super-generic API, since you'll need that flexibility anyway.


Security problems aside, testing is a much more present and pragmatic reason to say "This is a terrible idea."


Side-effects as well. You'll still need an API in order to handle things like sending transactional emails, kicking off batch jobs, or producing other side effects. It doesn't really spare you from developing an API for any non-trivial application.


Absolutely. Not to mention, it feels like a bleeding of concerns to me. The simplest answer to me is "I don't _want_ my front end concerned with SQL queries"


The discussion here is frustrating because people are assuming that you'd pass the SQL directly to your database backend and expose your entire schema and all your data.

SQL is just a query language, just like GraphQL, and is no less 'secure'. You can still have a layer between the front end and application database.

A practical way to use SQL would be to expose the subset of data that is visible to the user and allow that to be queried by SQL, just like it would be by GraphQL or REST.


Astronomy has been doing that for just over a decade: https://www.ivoa.net/documents/TAP/20100327/


We are doing this under the covers in our app using https://github.com/kofrasa/mingo to let us write data queries on the SPA side.

It lets us build reports and other configurable data queries in a standard way for our case.

I can completely see how this makes sense using SQL although I think the complexity of the implementation might be a fair bit higher than mingo.


I made the mistake of reading the comments before the article, and was stuck with the thought process of "but it's insecure"

Then I remembered my experience of working with an SQL-like based API, that was a pleasant surprise and joy to work with.

I do not see why we can't use it from a client-side perspective, and safely re-cast it server-side.


Okay, but what about the performance question? I assume this would apply to any language wrapping SQL. How does GraphQL prevent people from sending pathological queries that would wreck DB performance?
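(The only mechanism I'm aware of is bounding queries at validation time, before any resolver runs - something like this sketch using the graphql-depth-limit package, whose exact API I'd treat as an assumption:)

    import { ApolloServer, gql } from "apollo-server";
    import depthLimit from "graphql-depth-limit";

    const typeDefs = gql`
      type Query { hello: String }
    `;
    const resolvers = { Query: { hello: () => "world" } };

    // Reject any query nested more than 7 levels deep during validation.
    // Cost-analysis plugins work the same way but weight fields instead
    // of just counting depth.
    const server = new ApolloServer({
      typeDefs,
      resolvers,
      validationRules: [depthLimit(7)],
    });

    server.listen();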


I think we've all wondered this at some point in our careers.

It's something you should never do. Off the top of my head:

1) Security (as has been beaten to death here)

2) Interface versioning. If you expose a generalized SQL interface to your consumers directly, good luck ever making a change to your database schema. You'll have no idea whose workflows you break. This high coupling becomes very painful very fast.

3) Abstraction. The way data is stored is often not the way data should be surfaced. What about application layer data integrity, things like that? This would require the frontend client to have far too specialized of knowledge as to how the backend works.

4) Optimization. How do you optimize for queries you don't control? Someone will craft something that can bring your database to its knees under load without even trying.


Totally valid points. I think this can be solved by a change of layout: use SQL queries in the frontend, but have a layer between your frontend and your database that knows which queries are valid and how to map them onto your own DB representation.

You solve a big chunk of your problems that way.


That sounds way harder than just defining an API for the front-end to use.


I think, basically, yes. What GraphQL really solves is letting the frontend compose complex, yet specific, queries. SQL lets you do the same thing. But REST APIs and gRPC-based APIs generally do not.

I guess what I would argue against in this post is that SQL is the right language for this. The post starts off with the assertion that we're just using SQL on the backend anyways - not so, in fact my company doesn't have a single SQL database.

But I think the overall point remains - if you want powerful clients you can't beat a query language.

Apache Calcite comes to mind: https://calcite.apache.org/


Actual, a budgeting app built by @jlongster (creator of Prettier), is a local-first webapp built with in-browser SQLite. It uses CRDTs to sync state to the server.

A tweet storm: https://twitter.com/jlongster/status/1341586372252078083

A blog post which mentions some of how it works: https://actualbudget.com/blog/porting-local-app-web

A more recent tech talk (which I haven't watched): https://blog.fission.codes/building-actual-budget-with-james...


Well, back in the "olden days" in the late 90's, this was actually pretty common: ColdFusion allowed SQL statements to be embedded in the templates, and JSP had sql tags. IIRC, most people moved away from that model primarily for flexibility and maintainability: having SQL statements mixed in with the presentation led to a lot of code duplication and made putting a different front-end (say, a third party API) nearly impossible.


I used to build sites in PHP where the front end and backend were basically the same thing, with only the database existing outside the single codebase. It was definitely common for there to be SQL injection vulnerabilities, and years later I shuddered at the thought of some of those companies still running code I wrote when I was young and knew almost nothing about security. Luckily, nowadays most of them have switched to new codebases (or gone out of business), but this wasn't the case even 2-3 years ago.


I wondered about that, too, but I'd be hard-pressed to say whether SQL injection vulnerabilities are more or less common if you embed SQL in the front-end vs. in a back-end service. I suppose they would be slightly easier to identify and exploit in a more modern front-end like React because the source code is actually viewable to users.


Everybody is focused on literally replicating the data model and using SQL which obviously won't work.

But the idea is directionally correct. Replicate the subset of data the user has access to and wants locally and work on it disconnected, then sync changes asynchronously to server. This is difficult, but necessary for fully responsive and collaborative applications.

Check out https://replicache.dev for a productization of this idea (disclosure: this is my product). We will eventually add a SQL frontend for the cached data.


I’ve seen Replicache before and am definitely very interested in using it. Will Replicache be self-hostable in the future?


As of a month or so, it's even better: Replicache has no server side component at all. It's client-side software that talks directly to your server. See: https://twitter.com/replicache/status/1359690468892626944?s=... for more.

(We've shipped this now, but not re-announced it yet).


Yeah I went through the docs and saw that now it's client-side. Very exciting!


SQL on the front end? As a security engineer, this video sums up my thoughts precisely: https://youtu.be/31g0YE61PLQ

I'd want to call it "SQL Injection as a Service", but is it even injection anymore when the client can just send whatever SQL they want? Trying to filter and validate the SQL on the backend to restrict what a client can do would be an absolute minefield and would be so difficult to get right that you wouldn't save any time.


I'm in the "no" camp. First, I don't want to couple my data model to the front-end. Second, the front end applications I worked on, have a relatively small contact surface with the database. Mostly, they retrieve a bunch of records with a common property. There's no need to expose the full power of a query language. Third, the thought of uncontrolled, expensive queries should make every sysadmin break out in sweat.

I also expect fans of Django and the like to be opposed...


No one said it had to be coupled. You could, in theory, expose your API as a view of some kind.


isn't that pretty much... GraphQL?


I had the same question a year ago and created a side project on that topic. The solution is very close to the Firebase approach of writing security rules. Those rules whitelist SQL statements and give some flexibility by defining query parameters and having a context object that contains userIds and so on. For example:

frontend sql statement as a json object:

  {
    select: { 
      name: field(Project, 'name') 
    },
    from: table(Project),
    where: equal(field(Project, 'ownerUsername'), '<my-user-id>'),
  }
and here's a rule configured on the backend:

  const rules = [
    allow(authorized(), {
      select: {
        name: field(Project, 'name'),
      },
      from: table(Project),
      where: equal(field(Project, 'ownerUsername'), requestContext().userId),
    })
  ];
Sadly, the documentation is quite poor for now, but you can check it out here: https://github.com/no0dles/daita

Edit: code formatting


Don’t know about web support these days but a lot of mobile and desktop apps use a pattern of having a local SQLite db which is “synced” to a remote database using a sync protocol.

That sync protocol could be anything (rest, rpc, graphql, or even xml changesets in zip files on webDAV - I’m looking at you OmniFocus)

This setup has the benefits of being able to work offline (just sync later when the network is back) and the ability to perform local SQL queries to populate UI views. But it comes with all the extra complexity of synchronising local and remote databases, stuff like handling conflicts.


I actually wrote a blog post about this exact scenario. It takes ideas from the datasette project (https://simonwillison.net/2017/Nov/13/datasette/#Arbitrary_S...). My example opens a read-only sqlite db and exposes the entire SQL API through a single PHP file.

The post is here: https://ohdoylerules.com/web/sql-as-an-api/ The code is here: https://gist.github.com/james2doyle/9e4b2b4f17e33bfb236fbdaf...


Not SQL, but https://pouchdb.com/ looks like a better version of this.

Also not SQL: IndexedDB [0]. Seems like a well-supported [1] document database built into the browser would beat LocalStorage in almost every way. That's what I've found in my experience, at least.

Maybe SQL just isn't the right tool on either the frontend or the backend?

[0] https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_A... [1] https://caniuse.com/indexeddb
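For anyone who hasn't touched it, the raw API looks roughly like this (a minimal sketch; the database and store names are made up):

    const request = indexedDB.open("app", 1);

    request.onupgradeneeded = () => {
      // Runs on first open (or version bump): define the object store.
      request.result.createObjectStore("docs", { keyPath: "id" });
    };

    request.onsuccess = () => {
      const db = request.result;
      const tx = db.transaction("docs", "readwrite");
      tx.objectStore("docs").put({ id: "1", title: "hello" });
      tx.oncomplete = () => db.close();
    };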


I use PouchDB for this exact use case and it works great. It solves the storage and replication problems both.

If you build it upon SQL, you would also need to create a CRDT schema to make replication sound. That's probably more work than just using REST/GraphQL/RPC.

Local data is king. Especially as the network connection degrades.


A thing I tried, and ran out of steam with, was writing SQL queries in my client-side code but statically extracting them with babel, so you have some code like this:

    const users = await sql.query`select * from users`;
    console.log(users);
and in production this becomes something like

    const users = await fetch('/query?id=19a1f14efc0f221d30afcb1e1344bebd');
    console.log(users);
and the query itself stays on the server so you don't have to deal with the problem of unbounded complexity / DoS.

This is the same approach Facebook uses for GraphQL. The only reason I gave up with this idea is that getting the developer experience right is hard work and it does introduce some very tight coupling!
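The server half of that transform can stay tiny; a sketch of what the generated lookup might look like (the route and table are illustrative, not from my actual attempt):

    import express from "express";
    import { Pool } from "pg";

    const pool = new Pool();

    // Emitted at build time by the same babel pass that rewrote the client:
    // content hash of each extracted query -> the SQL itself.
    const queries: Record<string, string> = {
      "19a1f14efc0f221d30afcb1e1344bebd": "select * from users",
    };

    const app = express();
    app.get("/query", async (req, res) => {
      const sql = queries[String(req.query.id)];
      if (!sql) {
        res.status(404).send("unknown query"); // only extracted queries can run
        return;
      }
      const { rows } = await pool.query(sql);
      res.json(rows);
    });
    app.listen(3000);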


https://blitzjs.com/ does something similar, where you can write server side code inside components and it turns it into an API


This doesn't help with optimistic UI updates though.

The article is less about remotely executing SQL, but more about having a relational data model as a cache in the frontend.

However, when sending the query through to the backend on a cache miss, it would be useful to only allow queries that have been extracted like this.

It's similar to how Apollo does query [caching][1].

I really like it!

[1]: https://www.apollographql.com/docs/apollo-server/performance...


We've done this with octo-cli and OpenFaaS now.

Legacy codebases are hard to retire, but once you turn every SQL call into an HTTP call... the remaining part is the logic, which can be rewritten into something more modern.


How many people are actually using raw SQL on the backend? I thought people got tired of maintaining strings of raw SQL and migrated to query builders or ORMs.

> I usually end up with a bunch of Lodash (groupBy, filter, map, reduce) to shape the data I get from the server.

I mean, we are also doing the same thing with ORMs on the backend.


> How many people are actually using raw SQL on the backend?

I sure do. Nothing beats being able to just copy/paste a SQL query from a code base into your client to see what's going on, then tweak said query until it works as expected. When I do backend development involving the database a lot (and I do use a lot of views, functions and triggers), I spend more time in my database client than in my IDE.


I think ActiveRecord beats this, since the code ends up more readable, but you can still run it in the rails repl, and the generated SQL syntax is printed in both the webserver logs and the repl output.


> ActiveRecord beats this, since the code ends up more readable

It's subjective and I tend to disagree. Especially for very simple and very complex queries.

Also, unless you are following a "code-first" approach and doing all your schema migrations through the ORM, you have to redefine your tables, columns and relationships a second time and keep them up-to-date with every change, which is a huge hassle.

Obviously, if your app is a simple CRUD app, it might be simpler to just use Rails/Django/Symfony with an ORM and embrace the code-first approach.


Rails has very robust and well documented generators if you find it's a lot of work to create the migration file and the model file when you're changing the schema (I usually just create them manually though). It's a pretty standard workflow for rails developers, and given that rails is used by a lot of companies that are not "simple CRUD apps" (like AirBnB, Stripe, etc.) I'd say it's definitely very useful in many scenarios.

In fact, I'd argue that the "code-first" approach as you call it is actually more useful, because rails gives you bindings for before/after commit hooks, validators that aren't supported by SQL, etc.

> Also, unless you are a following a "code-first" approach and doing all your schema migrations through the ORM

I've literally never heard of a rails team migrating their DBs manually. Everyone uses ActiveRecord because it's a joy to use and very well supported and documented.


It's pretty rare for an ORM to cover all possible scenarios. I just introduced another raw query to our database because our ORM isn't able to generate `insert into ... select ... where not exists (select ...)` which is crucial for one critical bit that needs to handle idempotent writes.
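For reference, the shape being described is roughly this (table and column names invented for illustration):

    import { Pool } from "pg";

    const pool = new Pool();

    // Insert the row only if no row with that key exists yet, in a single
    // statement, which keeps retries idempotent. (A unique index on `key`
    // is still what actually guarantees this under concurrent writers.)
    async function insertOnce(key: string, payload: string): Promise<void> {
      await pool.query(
        `INSERT INTO events (key, payload)
         SELECT $1, $2
         WHERE NOT EXISTS (SELECT 1 FROM events WHERE key = $1)`,
        [key, payload]
      );
    }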


This is true, and there is definitely a place for raw sql, but in my experience ORMs cover most cases and I only have to write SQL queries for special cases.


for my latest contracted small web app project, I didn't see the need to get an ORM set up—I opted instead to just go for plain SQL statements in a single PHP file, even going so far as to use `?> ... <?` for "templating" HTML blocks, and a fat query string switch for the different subpages/GET/POST actions on the site. I'm sure almost everyone here, especially web developers, would absolutely cringe at what I've done, but for the scope of this small project it works extremely well. after my last job at a remote django/react shop, it was a breath of fresh air to build something "from scratch" without setting up and configuring large amounts of boilerplate, class structures, etc.—instead, the website is quite procedural, just like I would've written when I was teaching myself web development in high school in the mid 00s (except, without the gaping SQL injection holes). designing the web app this way let me build it quickly and rapidly iterate on features as the client requested, and he ended up extremely happy with the results.


I'm sure some did, but I got sick of the slowness and lack of expressive ability with every ORM I tried and just went back to SQL strings.


If you don't fully understand what happens under the hood with those ORMs and limit yourself to their, well, limitations, then somebody is going to have a bad time. In my experience, with almost all of them (read: all that I tried), some queries end up being way easier to write yourself, and most frequently it's a performance/resource issue.


There is PostgREST, which is just a thin wrapper REST api around a Postgres database:

https://postgrest.org/en/stable/


I thought of this when I first read the article title, and I think it's a decent compromise that mostly addresses the concerns in the other comments. Supporting truly arbitrary SQL seems too high-risk to allow from the front-end. PostgREST supports a simple subset in an HTML-like language that's fairly unlikely to have any unpredictable/negative consequences. I think it has limits for query time and result size built in already too. You do have to get used to setting up the correct security constraints in Postgres natively, though.

Strangely enough, I hardly ever see anyone really use the Postgres security checks. Postgrest is pretty much the only use case I've ever heard of.


This is such a useful abstraction.

I love working with PostGREST and have used it quite often for quick services (e.g. a vote button on a static site), internal tools (recently a covid checkin-screener), and for Proof of Concepts (postgres+postgis-powered full text search for address lookups in a webmap without using an external geocoder).

I personally have yet to use it for something with more than 200 users, but it sounds like others certainly have successfully. Supabase (https://supabase.io/) uses this for parts of their backend.


And its graphql counterpart, postgraphile (described elsewhere on this thread: https://news.ycombinator.com/item?id=26823819)


Sure, for the first 10% of the lifetime of a system. When you are starting, the schema is simple, there aren't a lot of special cases or bad abstractions, and things are easy.

10 years later, there are special cases on top of the special cases. The reality of compromising over and over when adding new features has added up and it's hard to remember the right way to get the correct currency converted total for an invoice (so you need to get the total in each currency, but not add the line items that are marked as 'deleted', then apply discounts, then convert currencies to USD, then add tax, then add shipping, then convert to the local currency, except when the delivery address is in Russia where legally you have to ...). Lots and lots of things are like this, and with no abstraction to ensure that these computations are isolated in a single component in the system you are going to get nowhere.


I've been using AlaSQL for years in my projects... is that the wrong approach, or am I missing something here?

http://alasql.org/


Looks cool - wonder why it's not more popular.

I think the read/write-through cache to a backend database is the missing piece...at least as far as I can see.


There's nothing wrong with using SQL "on the front end", just be sure to render the front end server side... like they did in the olden days.


I'm currently working on a project to fully expose SQL to clients in some reasonable fashion. It's not going to be a great fit for every application, obviously, but just allowing users to filter any data any way they want by default is nice. For example, in many apps reports have to be baked into the API, and/or running reports basically gives people admin-level database access; with SQL access they can just be HTML/JS scripts.

Combined with row-level permissions (which my library abstracts), I think it's a very powerful and usable approach to apps where storing structured data is the main goal.

Article for an older iteration: https://dvdkon.gitlab.io/mocasys-dascore/


Yah; it’s a game changer. I played around a bit with it on small projects, with obvious limitations/caveats/footguns[0].

[0] - https://github.com/alexanderguy/pgress


> we should have an SQL database in the frontend

Although not SQL, this is essentially the approach Meteor takes, where you have the same DB on the front and back end: https://www.meteor.com/


Abstraction exists for a reason. It comes with trade-offs. Often they are worth it. To me, abstracting the DB layer seems like a no-brainer so often but I’ll admit that may be a bias due to getting minor heartburn when dealing with the DB layer in general...


You can abstract at the DB layer with views + stored procedures[1].

[1]: https://postgrest.org/en/stable/schema_structure.html#schema...


As many SQL Injection issues (and various other injection forms, including Javascript) as we have with backend code, fuck no we don't want SQL being issued from the frontend.

For one, permissions around SQL are already crap. It takes the smallest screwup to expose data in co-mingled databases. No need to make it even worse.

For two, at least if the SQL is on the backend, a fix for an exponential query DDOSing your DB is fairly quick; you don't have to worry about some client keeping a cached copy of the frontend around for months at a time.

Finally, if you let the frontend send SQL, you have lost any and all ability to do a static audit against the queries that will be run against your database - because you have no control over what the client does.


> For one, permissions around SQL are already crap

No, they aren’t. At least not in Postgres, and AFAIK it’s true of every other major RDBMS, too.

Non-DB-specialists’ typical level of knowledge of DB permissions may be, but that’s a whole different problem.


> No, they aren’t. At least not Postgres

Yes, Row Level Security exists. But not by default. It also requires you to have one database user per external user. Something I (and most InfoSec professionals) wouldn't be keen on automating or managing, for fear of getting it wrong.

[EDIT] The following was removed from the parent post. It was in response to "you can't audit queries".

> No, you don’t

Citation needed. Yelling "No, you don't" without anything else is fscking useless in moving a conversation forward.

If the query is coming from the frontend, it's coming from a client that is outside of your control. I.E. you don't know what queries will come from the frontend, because the user can execute an arbitrary query.

If you take an effort to extract and bake those queries into a backend (perhaps calling it a proxy), then you're not actually emitting SQL from the front end, you're relying on a backend to emit the actual SQL used after performing a number of security checks. AKA, the status quo.


> It also requires you to have one database user per external user.

So?

You are adding a database entry per external user, along with data identifying their roles/permissions in the app, one way or the other.

You can either use custom code you’ve built on top of the DB engine to apply this, or you can use the far more battle-tested code in the database engine.

I’m not sure why so many developers believe reinventing the wheel on database security is more effective than understanding their tools.
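For what it's worth, the per-end-user-role mechanics in Postgres look something like this (a sketch; the role handling and validation details are assumptions):

    // The app connects as one low-privilege login role and switches to the
    // end user's role per transaction, so the engine - not app code - decides
    // what each user may touch. dbRole must be validated against a known list
    // of roles before being interpolated.
    import { PoolClient } from "pg";

    async function queryAsUser(client: PoolClient, dbRole: string, sql: string) {
      await client.query("BEGIN");
      try {
        await client.query(`SET LOCAL ROLE ${dbRole}`); // reverts at COMMIT/ROLLBACK
        const { rows } = await client.query(sql);
        await client.query("COMMIT");
        return rows;
      } catch (err) {
        await client.query("ROLLBACK");
        throw err;
      }
    }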


How many millions, or billions, of database users are you comfortable managing?

0 millions for me.

I'll stick with the well tested and explored method of having a users table with foreign keys on the ID to other tables to identify data ownership.

The idea of using one DB user per end user is, at best, novel and untested. We don't know where or how it will fail. Scaling would be a real bear too. And I'm fairly certain an InfoSec professional would have kittens if they saw that attempted in a production environment.


> The idea of using one DB user per end user is, at best, novel and untested

It’s a technique that has been in continuous use with RDBMSs for longer, and is more battle-tested, than the web itself. Plenty of enterprises (even the kind whose apps not only don’t provide direct DB access to their frontend, but don’t even give their backend access to tables or views, and instead mediate all external access to the DB through sprocs) have it as a security norm, and (correctly) view apps that manage end-user access outside of the database as taking a relatively novel, untested, and risky approach.


Looks like you have a knowledge gap here. Let’s start with MSSQL: you can use AD groups, and those are the one and only thing the database setup is concerned with. On the AD side, users get added/removed all the time. I have personally managed this kind of system with nearly “0” overhead on the DB side. Group assignment is then maintained by team managers, decentralizing the whole thing.

For non-Kerberos Linux systems, using LDAP to synchronize group membership is the only extra automation needed.

It is possible and fairly easy to manage a very large number of accounts IF done properly.


In practice, you would use something like SPARQL (which is already a W3C standard) as opposed to literally feeding raw SQL to the backend database. And SPARQL is designed for reasonable complexity, while still being more flexible than some purely ad-hoc thing like GraphQL (which only really deals well with tree-like hierarchies of data, not anything more general).


In my practical experience having worked with SPARQL and GraphQL, I would very much prefer GraphQL.

SPARQL is really nice for querying graph data, but I'd argue that for most applications tree-like hierarchies are good enough. Also, GraphQL and associated frontend libraries are written to be consumed by browsers, and the results are much easier to handle than results from a SPARQL query; JSON-LD is not the easiest format to work with. The tooling around SPARQL is not great compared to GraphQL either.

Furthermore, I'd argue that SPARQL queries are really hard to statically analyze/optimize on the backend. Different SPARQL engines behave differently, and whether you run your query against e.g. Stardog or Virtuoso can produce vast performance differences. SPARQL being hard to analyze statically becomes especially apparent once you start thinking about authorization of resources: it is not trivial to know which resources a client should have access to.

And while it is true that you can model provenance, authorization and all kind of other niceties in SPARQL as well, you will likely be on your own building all those tools yourself.

I believe SPARQL is a great query language for public / open data, e.g. Wiki Data, Government data, etc, but in a business context I'd rather choose REST or GraphQL over it.


Whether you're making a REST API, a GraphQL endpoint, or a SPARQL (or even plain old SQL) endpoint, you're still going to end up creating a backend to parse, transform, and apply security policies to the incoming queries before they can be run against the database.

The complexity and security costs just keep going up the more logic you shove into an untrusted client.


Not to mention:

- super hard to cache if any client can basically generate an infinite number of different queries.

- which dialect of SQL? If you change DB or upgrade versions, do you ask all clients to change as well?

- how do you do a query that involves several data sources? Some response data comes from redis + elastic search + postgres. That's why FB created graphql. That's also why it's so limited in scope, so that complexity is manageable.

- do you want to expose clients to implementation details such as OUTER JOIN vs INNER JOIN vs an array? Or let them figure out tricky queries involving HAVING, DISTINCT, COUNT or subqueries?


> do you want to expose clients to implementation details such as OUTER JOIN vs INNER JOIN vs an array ? Or let them figure out tricky queries involving HAVING, DISTINCT, COUNT or subqueries ?

No, you implement an exposed schema for each app (or possibly more granular) that consists of objects (mostly views, possibly sprocs) optimized for the planned access pattern of the app (or whatever client/component the schema is for), including convenience views abstracting those things away to the extent appropriate for the use case. That’s been a widely known RDBMS best practice for decades.


Direct SQL from the client definitely isn't for everyone, but these problems aren't insurmountable.

Caching: This is true of any API allowing, for example, advanced search. If you can get by without it, great, but it's often a requirement.

Dialect: I don't think it's a terrible idea to bet on one (ideally stable and FLOSS) database, it's just a risk to be managed, and I think it can provide benefits in some cases. Or you could be fancy and use jOOQ's SQL dialect translator (this could also be used to create a "custom dialect" without some features).

Multiple data sources: You could use PostgreSQL's Foreign Data Wrappers. An unorthodox solution, but it allows seamless access with fast joins.

Complex SQL features: I don't think it's really a problem, in complex analytics code I think some users would actually appreciate having all these features. It can be a code quality challenge, but I think that's better solved on a personal level.

This approach does have real problems, like it being hard to prevent DoS attacks, but I don't think there are as many problems as it seems on first glance.


It seems that the typical software engineer doesn't give security a single thought.


Considering the number of times I've had to remind coworkers about sql injection during code reviews I agree.


It feels like you and the parent commenters are framing this as a choice between allowing public access to existing databases, or continuing to build new tools for accessing the data.

The way I understood the article, the idea is 'can we expose a safe subset of SQL or fix the security issues instead of continuing to build entirely new systems'.


Here's the root of my concern - safely and securely emitting SQL from an untrusted client is exponentially harder than running canned SQL queries from a trusted client.

People forget (or don't care) about security or safety when it lets them move faster.

"Move fast and break things," is a terrible security model.


I mean the guy is front end. Front end development doesn't involve any security.


I think you would be surprised how quickly and easily an attacker can automatically exploit vulnerabilities in web applications with programs such as Ffuf and SQLmap.


Not disagreeing. I'm just saying people on the front end never actually deal with security, hence this mentality being the instigator of the suggestion to use SQL on the front end.


Hasura puts a GraphQL interface on top of my PG. It adds authorization, has plug-in-based authentication, comes with a decent SQL schema editor, has some migration system built in, and can serve as a proxy to other services. Use hasura-backend-plus and authentication (including email verification, forgot password, etc.) all just works.

Then I generate my client code from the GraphQL schema. I like strong typing so I use Elm on the front end. Now I have strong typing from SQL table to frontend code.

This is a game changer! I don't even want stringly typed SQL, not in the backend, not in the front end. I want strong types guarding me against my own stupidity.


People interested in this can also checkout Postgraphile.

I’ve been using both the past few weeks and have preferred Postgraphile.


Could you explain what you preferred in Postgraphile?


I've tried building the exact same app using Hasura and Postgraphile after having built another app in Postgraphile:

  - Postgraphile has better Relay support, stuff like returning added edges, returning deleted nodes, good global ID support.
  - Hasura didn't support something like adding a currentUser field when using the Relay support. The pagination in the non-Relay mode was very basic and didn't comply with the Relay pagination spec.
  - Comparatively even when not using Postgraphile's Relay support it still supports both the Relay pagination API and the easy "nodes" pagination API.
  - With Hasura I initially liked the UI for management but found it tedious as I continued working.
  - I came across some bugs with Hasura's pagination API's, which was the final thing that made me decide to switch to Postgraphile.


This is already a thing in Clojure/Script, using datalog instead of SQL. Syncing datoms between Datascript (and/or Datahike [0]) on the front-end and Datomic on the back-end can be almost trivial, and there are libraries like Datsync for that [1].

[0] https://lambdaforge.io/2021/03/03/datahike-clojurescript.htm... [1] https://github.com/metasoarous/datsync


For offline apps, WebSQL would have been awesome. Thanks to some ivory tower types at Mozilla we can't have it [1]. We got SQLite in a computer on Mars but apparently that's not enough to base a web standard around because some people think that It WoUlD Be BaD iF wE oNlY hAve oNe implEmentation! Oh noooo!

(This despite the fact that every Linux distro uses the same kernel and Linux hasn't suffered some sort of monoculture meltdown.)

[1] https://nolanlawson.com/2014/04/26/web-sql-database-in-memor...


They also criticized it for poor "developer aesthetics". [1]

> We were resolved that using strings representing SQL commands lacked the elegance of a “web native” JavaScript API, and started looking at alternatives.

> In another article, we compare IndexedDB with Web SQL Database, and note that the former provides much syntactic simplicity over the latter.

IndexedDB and "syntactic simplicity". For anyone who has worked with that API, boy oh boy.

[1]: https://hacks.mozilla.org/2010/06/beyond-html5-database-apis...


No.

The front end is an untrusted computing environment and anything you give to your front end developers you also give to potentially hostile users.

The security implications of things like GraphQL are, frankly, bonkers, but nobody seems to notice.

This is in stark contrast with the server side, where you do have a trusted computing environment and you can, to an extent, give your developers what amounts to "root" access to the database.

I wrote a blog post on this a while back:

https://intercoolerjs.org/2016/02/17/api-churn-vs-security.h...


Yes. My point of pain has been ElasticSearch. The JSON-happy queries with far too many curly braces and square brackets are just difficult to read.

A simple statement like SELECT Age FROM Employee WHERE name = 'Frank' turns into a monstrosity like

    POST Employee/_search
    {
      "query": {
        "bool": {
          "must": [
            { "match": { "name": "Frank" } }
          ]
        }
      },
      "fields": [ "Age" ]
    }

Thankfully they added rudimentary SQL support so I could use it in ad-hoc queries.
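That support looks roughly like this, if memory serves (verify the endpoint details against your ES version's docs):

    async function sqlSearch(): Promise<string> {
      const res = await fetch("http://localhost:9200/_sql?format=txt", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          query: "SELECT Age FROM Employee WHERE name = 'Frank'",
        }),
      });
      return res.text(); // tabular text, much like a SQL client would print
    }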


I wish someone took the SQLite query parser and used it to output Elasticsearch's horrid language so that we could write queries in a language that our devs don't universally loathe.



I love this.


Smells like an XY problem. What they want isn't "SQL on the front end", it's an abstract machine that both the frontend and programmatic server-side systems interact with, and that abstract machine brokers the flow between these clients and the backend. Let that machine handle permissions, caching, etc, since that's hard, tedious, error prone, and less secure for humans to do. Just like how we stopped writing assembly.

I have to look into it, but some posters mention blitz.js, this may be just such a thing.


Yes. Last year I made an app that _kinda_ does this ( https://www.seamless.cloud ).

It solves the security issue by making you define the query first in Seamless, and then you get a REST API endpoint to run that query, with optional parameters that are run in a prepared statement.

Query (in the Seamless backend)

> SELECT * FROM posts WHERE author_id = ${author_id} LIMIT 50

And corresponding REST endpoint for that:

> GET https://primary.dbapi.seamless.cloud/somecompany/queries/run...

I dogfooded it while writing the serverless backend for BudgetSheet, and in practice I got tired of having to pre-define each query up front in the app before I could run it from the app. It was kinda painful to switch back and forth vs. just having the queries in the codebase. So... I have definitely been thinking a lot more about how to move the SQL to the client, but in a secure way (perhaps in combination with the Seamless app only allowing "verified" queries to run in production, etc.). There is definitely some room here for a little innovation.


> Why hasn’t this been tried?

https://caniuse.com/sql-storage

Supported in Chrome 4, circa January 2010.

I'm forgetting the name of it but there is a spec, I think in wicg, for low-level/byte-level file access. This should make using emscripten to compile things like sqlite far more straightforward, to build whatever you want.

I like the shout out to streaming changes out of the database, which the author mentions in terms of redux & time travel abilities from having a wal. Server side systems like Debezium for doing this have been gamechanging. At the moment, the high-level file api has no support for watching for changes (https://github.com/WICG/file-system-access/issues/72). Maybe post 1.0 we might see progress. I don't believe the low-level api has anything for watching for changes.

From a more meta-assessment level of this article: I do hope we can stick with HTTP-centric entities, personally. GraphQL, with its generic endpoint that all operations get sent to, is, in my view, quite a bad development. But not irreconcilably so: bridges could be built. I'm definitely in favor of experimentation, trying things out. But I also think there are good reasons to keep entities and their URLs around, and not abandon that. GraphQL right now doesn't seem to think or care about that, but I also think it could be reformed.


Doesn't appear it was deprecated for the usage that I propose.

https://softwareengineering.stackexchange.com/questions/2202...

>Byte-level storage

Would be cool.

---

https://hacks.mozilla.org/2010/06/beyond-html5-database-apis...

> From article: we think developer aesthetics are an important consideration...we were resolved that using strings representing SQL commands lacked the elegance of a “web native” JavaScript API, and started looking at alternatives.

Seems a bit short-sighted considering SQL is super popular today on the backend.


I think it's changed names (maybe) since I last saw it? But here is the low-level IO spec:

https://github.com/WICG/storage-foundation-api-explainer


No, I don't want to use SQL on the front-end, nor on the back-end (except as a final translation). I want all my business logic, both front- and back-end, represented in data structures that use language as close to the domain-level as I can get while still allowing flexible querying and manipulation. I want queries that seamlessly mix blob files, SQL records, and code-as-data ("plugin" code properly source-controlled and tested, but that contains database-like IDs and hooks that allows it to be joined in-memory to SQL records). I want an abstraction layer that makes sense for the domain I'm working in, and I mean something specific when I say "abstraction layer". I want a basic level of self-awareness to the application where it knows its own topology and that can be used to generate API documentation, help files, API endpoints, etc. I want security to be an emergent property of this setup -- where exploits are preemptively disallowed simply because they are not sensible domain operations -- rather than trying to come up with an exhaustive list of all the clever ways someone could write exploitative SQL.

So, no thanks.


This sounds like what Blitz.js is doing: https://blitzjs.com/


Setting aside the security issues (legion!) and the implementation issues (manageable but hard!) you have to think about the maintainability of the codebase. Retrofitting schema changes into a codebase full of SQL statements fucking sucks. Every time I've dealt with this I've found that having an API on top of your DB is far better for maintainability.


each passing year, the web dev world is getting closer to reinventing PHP


PHP doesn't run in the browser. This is about client-side SQL.


yep, the invention of 'server-side client cache' is ahead of us


I think when you look at this problem from the viewpoint of local first software - with decentralized sharing and materialized views with a set of private/public relational event source logs - it really comes together well.

It doesn't exist yet, but when the tooling is there it's really going to make a splash.


With HashQL there actually was an experimental implementation of this idea

https://github.com/porsager/HashQL-todos-sample/blob/master/...


Not only experimental. I've been running HashQL in production for over a year now. I really wish I had time to write about it. I'm going to present it at https://speakeasyjs.com/ on the 7th of May, so I'll have to get something ready by then at least.


I wish I had time to try it out ;) Make sure to share that video of your presentation via the mithril channel on gitter so that I don't miss it.


I'm at a place where we've acquired a company that does something similar but through PHP/jQuery, and it's been a herculean effort to make any changes or improvements as there were no abstractions at hand. After many failed attempts at a re-tool/re-build, we've settled on showing a blank page that we then hijack to show a React page that's built on top.

New FE devs that come in now have a 6-month period where they must learn a new proprietary tech just so that we can circumvent what's already there. I would say it's not a great idea if you have to work with other FE devs (who are not full-stack) and if your app is of a significant size (YMMV I guess; we've had our own issues with GraphQL too).


Having worked with thick clients in the past, yes and no. There are a lot of caveats to data manipulation on client machines. You lose the luxury of caching, etc. Paging becomes more problematic, and there are a host of other issues.

A better solution would be a GraphQL like API that leverages SQL.


I noticed when developing a Single-Page Application that a lot of the backend code seemed to be merely taking requests from the app and reformatting them into database queries.

It wasn't always clear whether any given piece of processing logic belonged in the frontend or backend code, and trying to manage the contract between the two seemed like unnecessary busywork when there was already a well-designed interface between the backend and the database.

In the end this 3-moving-parts design did work, but I wished I had tried something like AWS AppSync[0] which is basically "managed GraphQL".

[0] https://aws.amazon.com/appsync/


Yes we do. Yes, it has been tried.

Fundamentally, SQL the language is not designed for the added complexity, and I'm not aware of anything closer than GraphQL.

There are security, transactional, caching and computational questions to solve.

Edit: Maybe FirebaseDB et al are better analogs, actually.


This reminds me of a reddit comment from /r/ProgrammerHumor - someone was correlating music styles to programming languages. They decided MSSQL was the equivalent of a 6th grade recorder recital. More power to them I say!

I've been on the data side for a long time and have always wanted to explore front end but the API mandate sure did make that learning curve difficult, but for good reason.

It would be ideal if someone could iterate on the existing frameworks to evolve something like GraphQL into a full-blown front-to-back SQL tool.

Anyone who says SQL perms are crap doesn't know what they're doing with perms. I agree something is going to have to elegantly add controls between front and back.

Very thoughtful piece.


It's interesting to hear the complaints about SQL in this thread and consider how they apply to GraphQL. Yeah, there _are_ huge security and DOS threats when you allow the front end to send arbitrary queries in a rich query language. The work of mitigating those threats will be tedious, meticulous, and vulnerable to error.

I personally know how to mitigate the risks better in a middle-tier GraphQL server than in a database, but I'm open to the idea that database security controls might be up to the job, too, if not now, then maybe in the future.

I think there's a lot of naiveté in this post, but I appreciate where the naiveté is taking us.


For those not familiar, check out Postgraphile.


Came here to say this. To expound a bit, you can use it as a highly pluggable/extensible express plugin which you can point at any postgres db and get all CRUD and other operations exposed in GraphQL, without writing any code.

You can easily hide or rename fields that you don't want exposed, override their behavior in Node, and add generated columns or fancy SQL mutations either in JS or directly in your DB and have them exposed as you'd expect in GQL.

With a commonly-used plugin, you can even do rich ORM-like queries directly from graphql, like:

    query {
      allPeople(filter: {
        firstName: { startsWith:"John" }
      }) {
        nodes {
          firstName
          lastName
          posts(filter: {
            createdAt: { greaterThan: "2016-01-01" }
          }) {
            nodes {
              title
              body
            }
          }
        }
      }
    }
More examples here: https://github.com/graphile-contrib/postgraphile-plugin-conn...

(Personally, I think their docs are good at telling you how to use it but fairly bad at showing how great the tool is)


But what about optimistic UI updates or offline support in the client? The frontend has no notion of the relationships between the entities, which means you will be making a lot more queries than you need to, or just ignoring optimistic updates (which may be fine for some apps).


As you note in your article, Apollo's gql client has optimistic UI, but it looks like a pain.

The Apollo cache seems to be powerful enough to normalize the way you want, perhaps with some extra code: https://www.apollographql.com/docs/react/caching/cache-confi...

Apollo also claims support for cache persistence, e.g. to localStorage: https://www.apollographql.com/docs/react/caching/advanced-to...

I haven't used Apollo myself so maybe it's not as usable or powerful as it claims to be.


Yes, a client side DSL for querying data is superior to REST.

But the data model of a complex app is usually more than its SQL database. So maybe something closer to GraphQL with code gen for your database is more appropriate?


We are currently settling for something in between: EQL (using Pathom resolvers)

This gives front-end devs the flexibility of GraphQL, composability for free (EQL's DSL is just a data structure you can manipulate easily) and is database agnostic.

In fact, Pathom makes it easy to link different datasources and make them available in a uniform way to the front-end devs. All the while, resolvers (running on the backend) keep control and prevent malicious clients from doing harm.

Same issues you would have with GraphQL though: N+1 queries need special consideration when writing resolvers.


NetSuite does this with SuiteQL, and Salesforce does this with SOQL. They're basically just REST APIs where you can put SQL in a POST body. It's definitely nice from a consumer standpoint.
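
The shape of such a call, roughly (the endpoint, auth, and response shape here are illustrative, not the exact vendor API):

    // POST a SQL string, get rows back as JSON.
    const res = await fetch("https://example.com/query/v1/suiteql", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: "Bearer <token>",
      },
      body: JSON.stringify({
        q: "SELECT id, email FROM customer WHERE lastmodifieddate > '2021-01-01'",
      }),
    });
    const { items } = await res.json(); // rows as plain JSON objects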


For a company we planned to acquire and integrate, one of our devs was wondering why the search requests the frontend made to their API looked suspiciously like raw ElasticSearch queries.

Two hours later that dev sent me a screenshot with our competitor’s production catalogue consisting of memes.

I told him to delete them so we could sign the purchase agreement without further ado.

To this day they don't know and I sometimes wonder how they ever got so far.

And this, ladies and gentlemen, is why you leave backend query languages to backends and just use GraphQL...


PostgRest allows doing something very similar to this but in a safer way: https://postgrest.org/


No. I want and need to do manky shit on the backend that I desperately need to hide from users or the front end. If I can't hide that then basically all my users get to see my underwear and that's no good for many reasons (e.g. exploits, abusing the API, etc).

And then there's the whole security can of worms. Seriously, if you don't respect the benefit of the text layer in JSON then you need to go have a look at why .NET Remoting failed. You want loose coupling here.

Perhaps I'm missing the point of the article?


My dream is:

* Every user in the web interface gets a SQL database user.

* A change in the database schema should instantly be reflected in the app GUI.

A dream that I have worked towards step by step in a few different projects. I am not sure how it scales to millions of users, but if you have a few hundred to a thousand users it should be fine. The main problem I have found is row-level security. The app should basically just be a graphical interface that can be configured through a config file.
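
A minimal sketch of the row-level-security piece in Postgres, assuming a hypothetical "documents" table with an "owner" column and one database role per app user (created elsewhere); this runs once with admin credentials:

    // Enable RLS so each database user only sees their own rows.
    import { Client } from "pg";

    const admin = new Client({ connectionString: process.env.ADMIN_DATABASE_URL });
    await admin.connect();
    await admin.query(`
      ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
      CREATE POLICY owner_only ON documents
        FOR ALL USING (owner = current_user);
    `);
    await admin.end();

When each web user then connects as their own role, every query against "documents" is filtered to their rows automatically.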


It's worth noting, since most of the discussion here focuses on an imagined variation of the headline ("don't we all want raw SQL access to our backend DB on the front end?"), that the article isn't about that at all. It is actually about having an in-client SQL database with local copies of the relevant subset of backend records in the same shape, with a plug-and-play syncing mechanism rather than per-app custom front-end data request, caching, etc. logic.


Unpopular opinion: I'd like to have no code or as little code as possible on the frontend (think a tiny library - or a minimal browser) and define application flow / logic on the backend.

I don't want to have SQL on the frontend; I want to have no frontend.

I'm not advocating a return to the good old days of backend + HTML and full page refresh; a small layer which smartly and dynamically loads different pages / submits requests would be acceptable.


I think this is the general philosophy of LiveView / Livewire.


I basically did this for tixit.me with mongo. I even wrote a module to parse mongo queries on the frontend so we could query our cache identically to our back end. https://github.com/fresheneesz/mongo-parse

It was a pretty big pain to write the middleware necessary to make this happen. So I kind of agree.


> Why hasn’t this been tried?

Of course it's been tried. It's because it's been tried enough that people now tell you to hide SQL behind a middle layer...


Because you all stopped using server-side rendering and let the browser do the risky stuff.

If you want to cut down the overhead, use a simple PHP app and deliver prerendered HTML with JS just for the ajax calls. It's fast, reliable and tested in the wild. Servers are so powerful they can handle thousands of users at the same time.

You also don't drain the users' batteries anymore, a win-win.


This is, effectively, what Meteor (JS) does, just with MongoDB instead of SQL. It embeds a mini-MongoDB JS client on the frontend to store cached data, queries are run against that, and missing data is requested, streamed, and rendered asynchronously.
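
A sketch of that pattern with a hypothetical "tasks" collection: the server publishes a filtered cursor, and the client subscribes and runs the same query API against its local minimongo cache.

    import { Meteor } from "meteor/meteor";
    import { Mongo } from "meteor/mongo";

    export const Tasks = new Mongo.Collection("tasks");

    if (Meteor.isServer) {
      Meteor.publish("myTasks", function () {
        return Tasks.find({ userId: this.userId }); // only this user's documents
      });
    }

    if (Meteor.isClient) {
      Meteor.subscribe("myTasks");
      const openTasks = Tasks.find({ done: false }).fetch(); // hits the local cache
    }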

I'll scream from the rooftops: Yes. Meteor has its shortcomings, MongoDB being a big one, but this pattern of unification is so fantastic for every party involved that even reading about the "new-fangled acronym for building apps" disappoints me.

Now, we've got the JAM Stack (that's the new one, right?): you've got your separate web frontend and service-oriented backend, probably communicating over GraphQL; then you've got your database language, probably SQL, but wait, let's put Prisma in front of that so it speaks GraphQL, but it'll be a different GraphQL schema than your frontend's because you don't want to expose data; and jeez, maybe serverless functions as well, that sounds good; and I'll just stop typing here because the level of complexity and intricacy we've reached just to build a fully-featured web application is far beyond useful.

The new-generation tools we've built to make development easy for teams of 200 engineers now demand that every team have 200 engineers. It's a self-fulfilling prophecy; congratulations, you just universally raised the cost of software for every human on the planet.

A few days ago, I installed Nextcloud on a little $10/month Digital Ocean instance. I'm blown away by the performance this PHP app puts down. Blown. Away. Using Google Drive, a pretty damn "snappy" app relatively speaking, feels like walking through mud in comparison, and let's not even go down the rabbit hole of the billion-dollar data centers and 56-core Xeon Platinum processors behind Google Drive.

Modern web stacks fail along every metric, except "how easy is this for our massive team of developers to maintain." (Of course, they don't actually "fail"; we developers just move the goalposts so 'master' passes. Future changes might fail it, but we can fix that.) They're insane to get started on, hard to maintain, hard to monitor, and at the end of the day have far worse performance for end-users. We started with the monolithic systems which share a ton between server and client, decided those were a mistake, then overindexed in the entire opposite direction instead of iteratively addressing what was wrong with them.

I predict a move back in the opposite direction. The situation has become insane, and I want PHP, Meteor, and Rails. Fortunately, they're all still there, but Meteor has mostly fallen into maintenance mode (not to mention MongoDB), and JavaScript really doesn't have another solution like this.


Can't agree more with this. I just can't believe how much time everything takes nowadays. I work on an inherited app which is basically just a three-step wizard. I'd say it's CRUD... but it is actually just C.

The amount of spaghetti React, Redux, RxJS, custom form libraries, etc. is crazy. Validations fail everywhere, and what we do in the frontend is inconsistent with what we do in the backend. Many screens have no URL so you cannot share them. Despite being just forms, it is incredibly slow and feels "heavy". Any change takes ages despite all the testing we have in place.

I know things can be done right with the SPA approach, but it takes a ton more effort.

I miss using Rails/Django, especially when the use case screams for it. But people want to have fun and do what Facebook and Google do; the only problem is we have 0.001% of their resources.


We are collectively off in the wilderness at the moment.

Apollo GraphQL is the same company that brought us Meteor by the way.

If you take a look at what you need to write to get good optimistic UI updates it is crazy compared to what Meteor was.

https://www.apollographql.com/docs/react/performance/optimis...

But as you said, if you need optimistic UI updates and you have 200 engineers you will get it done; developer efficiency just doesn't seem to be popular at the moment.


You can access your database directly from your view-code in ASP.NET Core Blazor. Best technology for the web ever created, period.


> We already have SQLite in the browser via wasm that we could use for this

There is a W3C specification to bring SQL to the browser but it's no longer maintained. Chrome used to support this.

https://dev.w3.org/html5/webdatabase/
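
The wasm route the article mentions is usable today with sql.js, the WebAssembly build of SQLite; a minimal sketch (the CDN URL for the wasm binary is illustrative):

    import initSqlJs from "sql.js";

    const SQL = await initSqlJs({
      locateFile: (file) => `https://sql.js.org/dist/${file}`, // where to fetch the wasm
    });
    const db = new SQL.Database(); // purely in-memory database
    db.run("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)");
    db.run("INSERT INTO users (name) VALUES ('Ada')");
    const result = db.exec("SELECT name FROM users"); // [{ columns, values }]
    console.log(result[0].values); // [["Ada"]]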


https://supabase.io/ ? MongoDB Stitch?


If you have a brand new SaaS app where there is very little data (kilobytes to megabytes) and each user/team is separated from the others, you could have:

- a sqlite database for each user on the backend

- then on page load, have them download their whole sqlite db on the frontend (see the sketch below)

- sync the two with something like litestream.io compiled for webassembly
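
A sketch of the download step, assuming a hypothetical /my-db endpoint that serves the authenticated user's SQLite file; sql.js can open the raw bytes directly:

    import initSqlJs from "sql.js";

    async function loadUserDb() {
      const SQL = await initSqlJs({
        locateFile: (file) => `https://sql.js.org/dist/${file}`,
      });
      const response = await fetch("/my-db", { credentials: "include" });
      const bytes = new Uint8Array(await response.arrayBuffer());
      return new SQL.Database(bytes); // the user's whole db, now in the browser
    }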


I think the use cases for that are pretty few and far between, at least at the 'team' level. If you have a team, you need some way to manage members, which means adding/removing members. With this model, you effectively could not remove members, because you wouldn't know which members had downloaded an offline copy of the team's database.

For individual users, this might have a lot more potential.


This is the same as a removed member still having their browser cache containing team data.

The advantage of this design is you get an app that is extremely responsive and offline capable. Then you just need a reliable way to sync changes. And you always have the option of just re-downloading the entire team db if it gets corrupted somehow. You could even have logic to indicate the sync status of each table so that you don't need to download the whole table... you could just populate it as you need the data, like a cache.


I am just throwing a number out there, but I would guess at least 90% of applications would do just fine with an RDBMS, an ORM, and standard settings. You don't need to write your own SQL to get just what you want.

Let your backend application worry about the SQL.


I'd love to see the React store and renders be animated through streaming SQL materialization.

Think Flink or Spark or ksqlDB's views server-side, sliced and sent to tables client-side, where other views join these tables and animate React components.


UNIX: treat everything like a filesystem

This post: treat everything like a SQL database

What's striking is that those two EXTREMELY important paradigms of data organization basically don't support each other without kludges galore.

Hierarchical vs relational

But fundamentally I agree with the poster: what we want are powerful, proven data models and powerful, proven access languages (SQL for relational; for hierarchical, I'll just throw out XPath, which is basically the ONLY good thing from the era of XML that I liked).

Graph could potentially be considered, but IMO it has never proven itself in the practical marketplace. In particular, it has no proven and stabilized query language in wide use by people who aren't domain experts, unlike users of the filesystem (basically everyone with a computer) and (less universally) users of databases.


I did write a Go application that gives clients full querying capability over their own SQLite3 instances; maybe I should open source it (just wanted to see if there's any interest).


I work with a legacy ColdFusion application.

SQL is right there available to me when I'm working on the front end templates.

It's honestly often faster to just use that compared to typical API calls.


CouchDB was a really elegant implementation of this strategy without the SQL bit. Local database in all clients, synced back to a server whenever there’s connectivity.


If we used SQL we would also inherit its well-known shortcomings:

- no proper way to extract and reuse expressions

- parameterizations are basically string concatenations

- a lot that I forgot

There are of course also good things about it:

- it's declarative

- it's easy to reason about (in its base form, no recursion, etc.)

- it's widely used and known

So while GraphQL is something similar that has solved some of SQL's shortcomings, it's new and has less mindshare. Also, last time I checked, the implementations in various languages (even Python) lagged behind the specification.

I'd say it's a step in the right direction, but tooling has to be improved I guess.



Yep, but not just for data journalism.


This is a bad idea for so many reasons.

The front-end and back-end data models are vastly different for any non-trivial application.


A day late, but... https://rsql.io


Please also see "SQL is a better API language than GraphQL – Convince me otherwise" (twitter.com/simonw): https://news.ycombinator.com/item?id=22892946


TOTALLY!

A lot of the answers here commit the original sin of thinking SQL = RDBMS.

Exposing the SQL of the RDBMS is not ideal (security and all that), but nothing says the SQL you expose has to be the RDBMS's.

You can create a SQL layer (akin to GraphQL) that is "compiled" to calls that MAYBE are translated to an RDBMS.

MAYBE.

If GraphQL is ok, SQL is too.
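
A toy sketch of such a layer: the client sends SQL text, but only pre-registered statements are accepted, and each dispatches to a handler that may or may not touch a real RDBMS (all names here are illustrative):

    type Handler = (params: Record<string, unknown>) => Promise<unknown>;

    // The "compiled" queries: the SQL text is just a lookup key.
    const allowlist = new Map<string, Handler>([
      [
        "SELECT id, name FROM products WHERE category = :category",
        async ({ category }) => [{ id: 1, name: "sample " + String(category) }], // stubbed source
      ],
    ]);

    async function runClientSql(sql: string, params: Record<string, unknown>) {
      const handler = allowlist.get(sql.trim());
      if (!handler) throw new Error("statement not in allowlist");
      return handler(params);
    }

A real version would want a proper parser and parameter validation, but the point stands: the SQL surface becomes a contract, not direct database access.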


Why isn’t “Just use SQL” the modern equivalent of “Just let them eat cake”?


Yes. I used to. Now I want to write backend code in the front end.


Not SQL, but maybe some revisited, improved form of it.


Seems like the SQL language needs a better security model.


No. Really. Don't.


Every horrible codebase I've seen in my career started with some early employee who thought "all we want is to expose the database over HTTP".

And it's true, this works, if you only have to manage a non-shared todo list.

tl;dr: nope.


There's a massive difference between exposing SQL as your API and exposing the database over a proxy. Your exposed SQL may have a separate model, or a more limited model, than what's actually in your database.


feels like overengineering to be honest


No


No


> We need a restrictive SQL parser to run server-side to restrict what can be run and prevent SQL injection. Maybe we need a query-builder/ORM to generate a safe intermediary SQL language in JSON so that we can validate it. Maybe taking some inspiration from Prisma’s type-safe data-mapper client could be used to help people write safe queries that won’t fail server-side validation unexpectedly.

What you are describing here is an API.



