This is part of a series of posts I’m writing as I prepare to attend Cloud Field Day 6. There are a total of eight presenters planned for CFD6, and I am going to cover two vendors per post. My goal is to have a basic understanding of each vendor’s product portfolio with a focus on cloud-related products. Some of these vendors I am already familiar with, and others are new to me. In this post we are going to turn to the networking side of things with a closer look at Lucidlink and ExtraHop.
I had never heard of Lucidlink prior to seeing their name on the CFD6 page. That’s not a dig. There’s a TON of companies out there, and I tend to know the ones I’ve worked with through consulting, the ones with giant signs at conferences, or the ones that have presented at a TFD event. It is thus with fresh eyes that I come to the world of Lucidlink, a company that will continually flummox my capitalization instincts by choosing not to capitalize the Link half of their name. This is entirely my fault and not theirs for choosing a sane way to write their name.
You would think that Lucidlink was a networking company; in fact, that’s exactly what I thought when I wrote the introduction to this post. Then I went and read the documentation on their site and realized that they are a storage company - one that relies heavily on networks and the cloud, so I suppose it makes sense.
Lucidlink has a single product: a distributed file system built on object-based storage like Amazon S3. On the client side, a Lucidlink client runs and mounts the file system locally. Their file system is log-centric, like ZFS, and separates the data and metadata planes from each other. For the data plane, Lucidlink uses object-based storage, but it is not writing each file to a single object in an S3-compliant bucket. Instead, they have created a data overlay that breaks files into uniform chunks, and each of those chunks is stored as its own object. This allows Lucidlink’s file system to treat the objects more like blocks on a traditional drive.
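To make the chunking idea concrete, here’s a minimal sketch in Python, assuming a fixed chunk size and the boto3 S3 API. The chunk size, key scheme, and function names are my own illustration, not Lucidlink’s actual implementation.

```python
import hashlib
import boto3

CHUNK_SIZE = 256 * 1024  # hypothetical uniform chunk size (256 KiB)

s3 = boto3.client("s3")

def write_file_as_chunks(path, bucket):
    """Split a file into uniform chunks and store each chunk as its own object.

    Returns the ordered list of object keys, which a metadata service would
    record so the file can be reassembled later.
    """
    keys = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            # A content-addressed key is one plausible scheme; purely illustrative.
            key = "chunks/" + hashlib.sha256(chunk).hexdigest()
            s3.put_object(Bucket=bucket, Key=key, Body=chunk)
            keys.append(key)
    return keys
```

Because the chunks are uniform, the file system can address them like blocks: a small read or partial update only touches the objects it covers, rather than rewriting one giant per-file object.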
Since each file is scattered across multiple objects, Lucidlink is heavily reliant on its metadata system to describe the status and location of each chunk that makes up a file. The client software keeps a full copy of the file system’s metadata in an eventually consistent model. Data is retrieved directly from each bucket location to the client, removing the need for a centralized file server. The client also maintains a local cache of files, which can be configured with a maximum size. The presumption here is that each client will have constant connectivity to the metadata service and the bucket locations storing its data.
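Continuing the sketch above (again, my own mock-up rather than Lucidlink’s code), the read path would consult the locally synced metadata for the ordered chunk keys, then check the local cache before making a round trip to the bucket:

```python
import boto3

s3 = boto3.client("s3")  # same client as in the sketch above
local_cache = {}         # stand-in for the client's on-disk cache

def read_chunk(key, bucket):
    """Return a chunk, preferring the local cache over a trip to the bucket."""
    if key in local_cache:
        return local_cache[key]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    local_cache[key] = body  # a real cache would also enforce the configured size cap
    return body

def read_file(chunk_keys, bucket):
    # The metadata service supplies chunk_keys in order; every client talks to
    # the bucket directly, so no centralized file server sits in the data path.
    return b"".join(read_chunk(k, bucket) for k in chunk_keys)
```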
The solution supports all the usual suspects like encryption, compression, and caching. Since the data backend is object storage, the solution should be able to scale to whatever the limits of their metadata service are.
There are many questions I have for Lucidlink. Let’s start with the other competitors in the field. What about existing solutions like Dropbox, Box, and OneDrive? If the primary play here is for end users, then I am not sure there is sufficient differentiation. All of those solutions have a mature client, syncing capabilities, and the ability to selectively determine client-side caching - not to mention their collaboration and sharing capabilities. If their target is more for servers and applications, then why would I want to use this over native solutions in a given cloud? Especially for container-based workloads that will need to warm up the storage cache each time a new container spawns.
I assume that the CFD6 presentation is primarily going to focus on explaining their solution to us. It is an elegant solution, but I really want to know where they feel the business fit is for their product. Who is their target market? How are existing clients using the solution today? I’m sure we could easily get buried in the technical weeds, but I’d like to take a more pragmatic view of the solution.
Here’s the actual networking company! And it is one that I already know a bit about. About four years ago, an ExtraHop rep came to the consulting company I was working for. They were interested in partnering with us, and I got to test-drive their software in our lab. While I really liked the software, I wasn’t in a position to directly influence partnerships. On the bright side, their analysis product helped me troubleshoot some thorny issues we were having around DNS and Active Directory. Thanks for that, ExtraHop!
That was for their Application Performance product, but I am noticing on their website that they have a product category specifically for cloud. What’s going on over there? Let’s have a little looksie.
The product is called Reveal(x) Cloud, and can we just stop here for a moment? I took a fair amount of math(s) in high school and college. Is ExtraHop trying to make a reference to a function, as in f(x) = y? Or are they trying to reference programming languages, where you call a function named Reveal and pass it an argument (x)? I honestly don’t know, but I immediately dislike the product name. You’ve confused me and made me think I might be on the outside of a clever joke. Don’t do that. Your product reveals what’s going on in the network of a cloud? Call it Cloud Reveal and be done with it.
[ANYWAY]
Cloud Reveal - as it will now be known on my site - is basically a SaaS version of their existing ExtraHop analysis engine that ingests packets from your public cloud provider of choice. Provided, of course, that your public cloud provider is AWS or Azure, which based on current market share is a fairly strong guess. AWS has VPC Traffic Mirroring and Azure has Network vTap, both of which can deliver mirrored packet flows to a specified destination - Cloud Reveal - for collection and analysis. Cloud Reveal uses machine learning and standard detection methods to determine if there is something in the packets that warrants your attention. ExtraHop does this sort of thing really well, and it makes sense that they would plug into the public clouds now that the capability is there to do extensive, agentless packet capture.
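For a sense of what the plumbing looks like on the AWS side, here’s a hedged sketch of wiring up VPC Traffic Mirroring with boto3. The ENI IDs are placeholders, and pointing the mirror target at a packet-analysis sensor is my assumption about the topology, not ExtraHop’s documented setup.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs: the source ENI belongs to a monitored VM; the target ENI
# would belong to whatever appliance ingests the mirrored packets.
SOURCE_ENI = "eni-0123456789abcdef0"
TARGET_ENI = "eni-0fedcba9876543210"

# 1. Register the collector as a traffic mirror target.
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId=TARGET_ENI,
    Description="Packet analysis sensor",
)

# 2. A filter decides which traffic gets mirrored; this rule accepts all
#    ingress traffic (a real setup would likely add an egress rule too).
mirror_filter = ec2.create_traffic_mirror_filter(Description="Mirror everything")
filter_id = mirror_filter["TrafficMirrorFilter"]["TrafficMirrorFilterId"]
ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filter_id,
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# 3. The session ties the source ENI to the target through the filter.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId=SOURCE_ENI,
    TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filter_id,
    SessionNumber=1,
)
```

That ENI-level mirroring is exactly why the solution can stay agentless: nothing runs inside the VM itself.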
Does this really check the box as a cloud service? It’s SaaS to be sure, and I like that it is agentless. Of course, there is an associated cost with running something like Azure vTap for every virtual machine in your environment. Current preview pricing for vTap is about $9 a month per VM, with the pricing set to double once the product goes GA. AWS VPC Traffic Mirroring is about $11 a month per VM. Layer on the cost of the ExtraHop solution, and things could get pricey for a large organization with hundreds or thousands of virtual machines running in the cloud.
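Some quick napkin math with the numbers above (the GA doubling is Azure’s stated plan, and the ExtraHop license itself is an unknown, so it stays out of the calculation):

```python
# Per-VM monthly mirroring costs quoted above; ExtraHop licensing is extra.
azure_vtap_ga = 9 * 2      # ~$18/month per VM once vTap pricing doubles at GA
aws_mirroring = 11         # ~$11/month per VM for VPC Traffic Mirroring

for vms in (100, 1000):
    print(f"{vms:>5} VMs: Azure ~${vms * azure_vtap_ga:,}/mo, "
          f"AWS ~${vms * aws_mirroring:,}/mo, plus ExtraHop licensing")
```

At 1,000 VMs that’s roughly $18,000 a month on Azure before ExtraHop even sends you a bill.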
I’m also curious how this handles Kubernetes managed services like AKS and EKS. Both run inside a VNet and a VPC, respectively, but intra-node traffic never leaves the virtual machine, so it would not be captured. And what about non-VM services like AWS Lambda, Azure App Service, AWS Network Load Balancer, Azure Application Gateway, and others? Are all of those included in the ExtraHop offering? Cloud is more than just VMs, and the network goes well beyond simple IaaS components. Tell me how you are protecting and monitoring all of those assets and I’m all ears. Otherwise you’re just updating your product - a very good product, mind you - to work in someone else’s datacenter.
I’m hoping that ExtraHop talks more about their Cloud Reveal product and how it goes beyond typical VM monitoring. Show me that this is a solution designed for cloud-native workloads, not just workloads that have been moved to the cloud.
Cloud Field Day 6 is happening September 25-27. There will be live-streamed presentations from both of these vendors. If you’d like to join in on the conversation, just use the #CFD6 hashtag on Twitter. All of the delegates will be watching that tag and asking questions on your behalf!