Zignal Labs uses leading technologies to gather and analyze massive amounts of data. Our platform is built on an open, extensible technology stack that draws on some of the most innovative technology available today.
“Everybody here is great. Everybody here is smart. Everybody is motivated. They’re passionate. They like what they’re working on. They’re curious, and engaged.”
- Amy Abito, UX/UI Designer
What We Practice
- We’re engineering with a driven, creative, entrepreneurial mindset
- We prototype, test, adjust and deliver revolutionary solutions to our clients
- We build solutions to some of today’s most challenging, data-driven problems
- We work towards a unified goal of continuing to make our products exceptional and our customers happy
- We have fun and work as a team to constantly improve an already great product
What We Use
Scala is our primary back-end language and was used to develop most of our core, business-critical applications.
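As a small illustration of the functional style Scala encourages for this kind of back-end data work, here is a sketch of aggregating media mentions by source. The `Mention` type and the sample data are invented for illustration and are not from our codebase.

```scala
// Hypothetical sketch: immutable case classes plus a collection pipeline,
// the style Scala makes natural for back-end data processing.
case class Mention(author: String, source: String, score: Double)

object MentionStats {
  // Average score per source, computed with groupBy + map over immutable data.
  def averageBySource(mentions: Seq[Mention]): Map[String, Double] =
    mentions
      .groupBy(_.source)
      .map { case (source, ms) => source -> ms.map(_.score).sum / ms.size }

  def main(args: Array[String]): Unit = {
    val sample = Seq(
      Mention("a", "twitter", 0.75),
      Mention("b", "twitter", 0.25),
      Mention("c", "news", 0.5)
    )
    println(averageBySource(sample)) // twitter averages to 0.5, news to 0.5
  }
}
```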
Spark is an amazing in-memory cluster computing framework that allows us to process data in near real time.
We use Node.js to interface with other back-end components and to build application APIs. Node is lightweight, efficient, and well suited to our data-intensive, real-time applications.
Data Visualization – Interactive technologies like D3.js, Three.js, and Highcharts create some of the most appealing parts of our products.
Natural language processing (NLP) is used within our dashboards to enhance our analytics and help our clients understand and condense their data in an automated way.
We use Backbone.js to develop single-page applications and to keep some web applications synchronized.
Elasticsearch is the data-serving layer for text search queries that drives our front end.
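For a sense of what that looks like, here is a minimal example of the kind of query body Elasticsearch accepts for full-text search; the field name and search terms are hypothetical, not taken from our actual indices.

```json
{
  "query": {
    "match": { "text": "product launch" }
  },
  "size": 10
}
```

A `match` query like this analyzes the search terms and returns the top-scoring documents, which the front end can then render.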
We use Docker and Mesos to run and automate our underlying infrastructure.
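As a rough sketch of how a JVM service gets packaged for that infrastructure, a minimal Dockerfile might look like the following; the base image and jar path are placeholders, not our actual build configuration.

```dockerfile
# Hypothetical Dockerfile for a JVM service; image and jar names are placeholders.
FROM openjdk:8-jre
COPY target/service.jar /opt/service.jar
CMD ["java", "-jar", "/opt/service.jar"]
```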
We use Apache Storm for real-time data enrichment, analytics, and real-time data delivery to the front end.
We use Apache Kafka for real-time analysis and rendering of massive amounts of streaming data at different phases of our data ingestion pipeline.
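To give a flavor of how a producer feeding such a pipeline is configured, here is a sketch of typical Kafka producer settings; the broker hosts are placeholders, not our actual cluster.

```properties
# Hypothetical producer settings; broker hosts are placeholders.
bootstrap.servers=broker1:9092,broker2:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
acks=all
```

Setting `acks=all` trades a little latency for durability, which matters when downstream analytics depend on every event arriving.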