Hacker News

Interesting -- they're going after the various log management companies (Scalyr, DataDog, Splunk, Sumo Logic, etc.).

Figured this was bound to come eventually since it's a very very big market and their basic CloudWatch product was lacking in many ways. It's not like Amazon to let an ecosystem eat their lunch.

Few things stand out:

(1) Per-query pricing seems...odd? Likely a good deal for small folks with a low volume of logs (i.e. just need to check actual AWS infrastructure logs vs. application logs), but if you have any actual volume this gets absurdly expensive ($0.005/GB scanned = $5 per query if you need to scan a terabyte. Large enterprises ingest multiple terabytes per day.)
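The per-query arithmetic above can be sketched as follows (assumption: the $0.005/GB-scanned rate and the charge-per-bytes-scanned model are as quoted in this thread; actual pricing may vary by region):

```python
# Back-of-the-envelope cost of a single query under the
# pay-per-GB-scanned model described above.

PRICE_PER_GB_SCANNED = 0.005  # USD, rate quoted in the comment

def query_cost(gb_scanned: float) -> float:
    """Cost in USD of one query that scans `gb_scanned` gigabytes."""
    return gb_scanned * PRICE_PER_GB_SCANNED

print(query_cost(1_000))  # scanning 1 TB -> 5.0 USD, as in the comment
print(query_cost(5_000))  # an enterprise scanning 5 TB -> 25.0 USD
```

At multi-terabyte daily ingest volumes, even a handful of full scans per day adds up quickly, which is the point being made.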

(2) The quote "I pick the first one, click Run query, the logs are scanned and the results are visible within seconds" doesn't sound terribly promising performance-wise. "Seconds" is an eternity in the log management world.

Still, super interesting!



What I value from DataDog is the vast number of integrations available. Want to monitor and alert on Prometheus endpoints? No problem, just set up a little config. Want to monitor your EC2 nodes? No problem, just link up your AWS account. Want to monitor k8s, etcd, nginx? Cool, no problem, there's a thing for that too.

If Logs Insights can match the simplicity of integration that DD and similar services provide, then those services had best watch out. On the other hand, DD's dashboards are pretty slick and I can't imagine AWS's utilitarian UI/UX ever competing. I wonder if that is a big enough differentiator. DD can get really expensive as well, but I'd love to see some comparisons on price.


What’s wrong with seconds when implementing / running a new query?

Obviously disk IOPS are the bottleneck in such a system, so encouraging folks to conserve them by charging for bytes scanned seems like a good measure.

Won’t an enterprise implementing this themselves, or using third-party tools, have to scale out their index nodes based on the amount of data their queries are scanning?


"so encouraging folks to conserve them by charging for bytes scanned seems like a good measure."

I wonder if it warns you about potentially expensive button clicks. A little Cartesian join, and...ouch.


It’s not powerful enough to do a Cartesian join (yet?). Most you can do is scan once over a big dataset.


Hmm. I wonder if an "index only" query charges for the scan of the index, or a full table scan.


AFAICT, there is no indexing. It appears to just scan the logs as necessary. Maybe indexing will come later.


Bigquery has a cool function like this where it will tell you the cost of the query before you run it.
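For reference, BigQuery's pre-flight estimate comes from a "dry run" (e.g. `bq query --dry_run`, or `QueryJobConfig(dry_run=True)` in the Python client), which reports bytes to be processed without executing the query. A minimal sketch of turning that figure into a dollar estimate, assuming the on-demand rate of roughly $5 per TB that applied around the time of this thread (a simplified decimal TB; the actual billing unit and current rate may differ):

```python
# Convert a dry run's bytes-processed figure into a cost estimate.
# Assumption: ~$5 per decimal TB on-demand, as at the time of this thread.

TB = 10 ** 12
USD_PER_TB = 5.0

def estimated_cost_usd(total_bytes_processed: int) -> float:
    """Dollar estimate for an on-demand query scanning the given bytes."""
    return total_bytes_processed / TB * USD_PER_TB

# A dry run reporting 250 GB processed would estimate:
print(round(estimated_cost_usd(250 * 10 ** 9), 4))  # 1.25
```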


It'll be interesting to see what the pricing numbers actually mean.

Say my app generates 1GB of logs per day. If I create a dashboard showing hits/errors for the last 24 hours and have the dashboard refreshing every 5 minutes, it might cost me $1.44 per day ($0.005/GB × 12 refreshes/hour × 24 hours, scanning 1GB each time).

By comparison list price for Sumo Logic is $90/month for 1GB/day of data (stored for 30 days) with queries being "free" (although there are limits to the number you can do).
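The comparison above can be reproduced directly (assumptions, all taken from the thread: each refresh re-scans the full 1GB of daily logs, $0.005/GB scanned, and a $90/month flat competitor rate):

```python
# Dashboard refresh cost under per-GB-scanned pricing vs a flat rate.
# Assumes every refresh scans the whole day's 1 GB of logs.

PRICE_PER_GB = 0.005          # USD per GB scanned
REFRESHES_PER_DAY = 12 * 24   # one refresh every 5 minutes
GB_SCANNED_PER_QUERY = 1.0    # the day's full 1 GB of logs

daily = PRICE_PER_GB * REFRESHES_PER_DAY * GB_SCANNED_PER_QUERY
monthly = daily * 30

print(f"${daily:.2f}/day, ${monthly:.2f}/month vs $90/month flat")
# -> $1.44/day, $43.20/month vs $90/month flat
```

Note the linear scaling: a second identical dashboard doubles the query bill, while the flat-rate plan stays at $90.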


Definitely will be interesting. I'd say this -- for sure, $1.44 per day is better than $90/month, but is it that much better? (Also, $90 is on the high end -- Scalyr charges more like $50/month for 1GB/day. Disclosure: I'm a co-founder, though no longer there.)

For $50-$100/month, you get basically unlimited querying, unlimited dashboards, etc., vs. $43.20 from Amazon for a single dashboard that refreshes every 5 minutes in your example.

Looking at the actual numbers now, there's no way this is targeting the same types of users at this pricing level. It's gotta be aimed more at folks looking at meta-logs for AWS services, the volume of which is going to be much less than actual application logs. Otherwise I can't see this being competitive.


With my logging setup right now (ELK + Fluentd) org-wide we have ~120 dashboards, and the overall setup costs us about $750 per month in resources. This would be a 700% increase for us. A 2x-3x increase would be worth it I think, but not that much.


> Per-query pricing seems...odd?

Could also be to dissuade people from (ab)using it instead of proper metrics. Pretty sure Sumo Logic lets you run queries on a cron and use the results for alerting (IMO not a great idea; I've seen it go wrong). And if there's an API, people will automate it and use it for god knows what. So charging for it seems like a smart enough move, iff the price is sane/competitive.



