Etegent Technologies, Ltd.


09/12/2023

Explainable AI (XAI) gives ML practitioners tools to understand what their network is doing. Typically, this is done at the full-architecture level, where XAI tools provide insight into what features a model is using, such as sets of superpixels. However, doing this within the system, i.e. pulling out features from sublayers, is a much more daunting task. Researchers at Brown University have put together an XAI toolkit to do just this. It is a Python-based library that currently only supports transformer architectures, but it is an important step forward in continuing to provide high-quality XAI toolkits.
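As a generic illustration of pulling features out of sublayers (a PyTorch forward-hook sketch, not the Brown toolkit's actual API), one might do something like:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: capture sublayer activations with forward hooks.
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 8),
)

captured = {}

def make_hook(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()  # stash this sublayer's features
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

model(torch.randn(4, 16))
print({k: tuple(v.shape) for k, v in captured.items()})
```

The captured tensors can then be fed into any downstream attribution or visualization method.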

Find their work at: https://arxiv.org/abs/2309.00244


09/08/2023

We are excited to welcome Ray Prather as our new Research Scientist starting 9/18/23! He will report to Jacob Ross and will be working out of the Dayton office!

Ray earned a Ph.D. in mechanical engineering at the University of Central Florida with a strong emphasis on computational fluid dynamics (CFD). In his research, he has engaged as a PI and co-investigator in studies utilizing multi-scale CFD modeling as well as structural analysis to investigate pathological flows in patients, characterize flow in novel palliative surgeries, and study the implementation of already-in-use and new medical devices such as stents, heart valves, grafts, and heart pumps. Utilizing similar flow and structural modeling techniques, he has collaborated with and mentored several undergraduate, graduate, and medical students at the University of Central Florida. He has experience carrying out experiments for validation purposes, to understand basic physics, or to optimize manufacturing processes. At Arnold Palmer Hospital for Children, he has been engaged in a variety of projects in close collaboration with clinicians to find solutions and further our understanding of congenital heart disease (CHD). Concurrently, he held a position as a postdoctoral scholar at Embry-Riddle Aeronautical University and occasionally taught courses as an adjunct professor at the University of Central Florida.

He is rather sporty: he enjoys biking, basketball, tennis, and football (soccer), and he has recently developed a strong passion for climbing. He is an avid reader and has an ever-expanding bucket list of places to visit.

08/21/2023

Residual networks (ResNets) are a long-used and well-understood architecture that comes standard in a variety of sizes (18, 50, 101). However, finding the optimal choice for a given dataset is not always trivial. Adding to the issue, the optimal ResNet size may actually live in between these commonly used versions. Researchers have addressed this issue with a neural architecture search (NAS) inspired approach called ResBuilder. This approach allows the network to insert or remove residual blocks during training to find the optimal size without the need for human tuning. They show that on many common datasets (CIFAR, MNIST, etc.) this approach outperforms starting from a standard-size ResNet18. This technique has promise outside of ResNet structures as well and could provide value for any default ML architecture.
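As an illustrative sketch (ours, not the authors' code) of how a block can be inserted mid-training: a residual block whose last convolution is zero-initialized starts as an identity map, so adding it does not disturb the partially trained network.

```python
import torch
import torch.nn as nn

# Illustrative sketch: zero-initializing the residual branch makes a
# freshly inserted block behave as an identity map at insertion time.
class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        nn.init.zeros_(self.conv2.weight)  # residual branch outputs zero
        nn.init.zeros_(self.conv2.bias)

    def forward(self, x):
        return x + self.conv2(torch.relu(self.conv1(x)))

blocks = nn.ModuleList([ResBlock(8)])
blocks.insert(1, ResBlock(8))  # the "insert a block" step during training

x = torch.randn(2, 8, 16, 16)
y = blocks[1](x)
print(torch.allclose(y, x))  # the freshly inserted block is an identity: True
```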

Read more at: https://arxiv.org/abs/2308.08504

08/04/2023

It is well known that although transformers perform at SOTA levels on almost all tasks, they require an immense amount of data to train. For typical benchmark problems, this is not an issue. However, in most practical applications the amount of data is orders of magnitude lower than datasets like ImageNet or COCO. This problem can be somewhat overcome by using transformer models that have been pretrained on large datasets and then fine-tuning on a smaller dataset of interest. However, researchers at Carnegie Mellon have looked into patterns within these high-performing pretrained models. They find that by simply initializing several key pieces of the model (i.e. the query, key, value, and projection matrices) such that the product of the query and key matrices is a positive identity matrix and the product of the value and projection matrices is a negative identity matrix (both with some off-diagonal noise), high accuracy can be reached with limited data.
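A hedged sketch of that initialization pattern (the authors' exact construction may differ): building the four matrices from shared orthogonal factors plus small noise makes the stated products hold approximately.

```python
import torch

# Hedged sketch, not the paper's recipe: orthogonal factors plus small
# noise give W_q @ W_k.T ~ +I and W_v @ W_proj ~ -I.
torch.manual_seed(0)

def mimetic_init(dim, noise=0.01):
    U = torch.linalg.qr(torch.randn(dim, dim)).Q   # random orthogonal matrix
    V = torch.linalg.qr(torch.randn(dim, dim)).Q
    W_q = U + noise * torch.randn(dim, dim)
    W_k = U + noise * torch.randn(dim, dim)        # W_q @ W_k.T ~ +I
    W_v = V + noise * torch.randn(dim, dim)
    W_proj = -V.T + noise * torch.randn(dim, dim)  # W_v @ W_proj ~ -I
    return W_q, W_k, W_v, W_proj

W_q, W_k, W_v, W_proj = mimetic_init(64)
print((W_q @ W_k.T - torch.eye(64)).abs().max().item())   # small deviation from +I
print((W_v @ W_proj + torch.eye(64)).abs().max().item())  # small deviation from -I
```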

For a full breakdown of results and methodology you can find the paper at: https://arxiv.org/abs/2305.09828.

07/25/2023

The intersection of dynamical systems (such as those in mechanical engineering) and machine learning is often overlooked. The high degree of generalization and freedom within neural networks makes them ideal candidates for modeling these difficult systems. Researchers at Trinity College and Ohio State University have combined these ideas in their new paper "Flow map learning (FML) for unknown dynamical systems". Flow map learning is an area of ML which aims to find numerical approximations to the true flow maps of dynamical systems. In other words, the goal is to predict where the system will be after one unit of time.
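A toy sketch of the idea (ours, not the paper's code): given sampled state pairs (x_t, x_{t+1}) from an "unknown" system, fit a small network to the one-step flow map.

```python
import torch
import torch.nn as nn

# Toy flow-map-learning sketch (illustrative, not the paper's code):
# learn the one-step map x_{t+1} = Phi(x_t) from observed state pairs.
torch.manual_seed(0)
A = torch.tensor([[0.99, -0.10],
                  [0.10,  0.99]])   # the "unknown" system's true flow map

x = torch.randn(1000, 2)            # sampled states
y = x @ A.T                         # observed states one unit of time later

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()

print(loss.item())  # should be small: the net now advances the state one step
```

Once trained, repeatedly applying the net rolls the system forward in time.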

To look at a full breakdown of their results you can find the paper at: https://arxiv.org/abs/2307.11013

07/05/2023

The power of deep networks has been proven through a number of impressive results and algorithms that perform well beyond what smaller networks can do. However, researchers at King's College have shown that this power might not necessarily lie in the high number of free parameters. They show that you can map any deep ReLU network onto a three-layer network with the same performance. This can be incredibly useful for both deployment scenarios and explainability.

To understand how to make your networks shallow check out their paper at:


06/27/2023

Many data types, such as remote sensing and MRI, naturally lend themselves to complex-valued neural networks; however, complex networks remain vastly underutilized. Researchers at the University of Münster attempt to bridge this gap by introducing a complex-valued transformer architecture. This architecture uses both complex-valued attention and complex-valued layer normalization. Although the model is demonstrated only on the simple MusicNet dataset, this provides a strong jumping-off point for future research into using complex networks for more difficult and interesting problems.
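As a minimal sketch of what "complex-valued" means in practice (our illustration, not the paper's architecture), PyTorch's native complex tensors support layers like:

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    # Hypothetical sketch of a complex-valued linear layer.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(d_out, d_in, dtype=torch.cfloat))
        self.bias = nn.Parameter(torch.zeros(d_out, dtype=torch.cfloat))

    def forward(self, x):  # x: complex tensor of shape (..., d_in)
        return x @ self.weight.T + self.bias

layer = ComplexLinear(8, 4)
z = torch.randn(2, 8, dtype=torch.cfloat)
out = layer(z)
print(out.dtype, out.shape)
```

Attention and normalization need more care (e.g. how to order or normalize complex scores), which is exactly what the paper works out.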

Find the paper at https://lnkd.in/e9za6WNJ


06/19/2023

Data augmentations are a powerful tool that can provide models with more robust, difficult, and varied training sets, which have been shown to improve downstream performance. Along these lines, it is often seen that more aggressive augmentations produce more powerful models; however, presenting samples that are too difficult too early can also cause a model to learn nothing. One way to deal with this is curriculum learning. The idea is to smartly feed an algorithm samples and augmentations in a prescribed fashion so that it can learn optimally. This can lead to faster model training and/or more powerful models.
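A toy sketch of such a schedule (names and the linear ramp are illustrative, not a prescribed recipe): augmentation strength grows with the epoch so early training stays learnable.

```python
import random

# Toy curriculum sketch: ramp augmentation strength up over training.
def aug_strength(epoch, total_epochs, max_strength=0.8):
    return max_strength * min(1.0, epoch / total_epochs)

def augment(sample, strength, rng):
    # stand-in augmentation: additive noise scaled by the curriculum
    return sample + rng.uniform(-strength, strength)

rng = random.Random(0)
for epoch in [0, 5, 10]:
    s = aug_strength(epoch, total_epochs=10)
    print(epoch, s, augment(1.0, s, rng))
```

Real curricula often rank samples by difficulty as well, not just augmentations, but the scheduling idea is the same.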

This is just an example of the use cases of curriculum learning, for a more robust overview check out this review: https://arxiv.org/abs/2101.10382.

06/07/2023

We are excited to welcome Shawn Simpson as our new Director of DoD Sales for NLign starting 6/26/23!

Shawn brings over 20 years of experience working as a business development and sales senior executive. He began his career honorably serving his community as a Baltimore County Police Officer and ultimately a major crimes detective.

He is an expert in Federal Government business development (BD) and sales, and has been sought out to stand up technology companies' Federal BD and sales teams by installing strong account planning, solution development, pipeline development, and capture strategies. In every position, he initiated, led, and executed strategies that led to new business, organic growth, strategic alliances, and identification of acquisitions. He has worked his entire BD career at the C-suite level to market, shape, and develop business across a broad spectrum of commercial and government sectors. He has deep experience working with the Department of Defense (DoD), the Department of Homeland Security (DHS), Federal Civilian Agencies, and the Intelligence Community (IC).

Personally, he is a proud husband and father of two daughters. Living in Colorado, Shawn and his family are avid skiers and he enjoys taking advantage of the great outdoors by hiking, fly fishing and hunting.

He is also an FAA multi-engine commercial pilot and Certified Flight Instructor (CFI). He has experience flying several aircraft, from C172s to the B747-400. He holds a Bachelor of Science degree in Aviation Technology from the Metropolitan State University of Denver.

05/22/2023

Several weeks ago we highlighted a paper about an architecture called LSKNet. However, LSKNet relies on the Oriented RCNN framework (ORCNN), so we felt it was important to highlight this higher-level structure. ORCNN is an extension of works like Faster-RCNN (FRCNN), which defined the two-stage detection paradigm. Within these frameworks, users can utilize different backbones that best suit their particular problem. ORCNN takes this idea and extends it to oriented bounding boxes by implementing an oriented RPN and an oriented RCNN head. This allows any backbone to utilize oriented bounding boxes and produce oriented RoIs. It is a flexible and powerful framework that many SOTA aerial detection architectures are built within.

Find the full paper at: https://arxiv.org/abs/2108.05699


05/15/2023

The standard approach to bounding box regression is to use the L1 norm loss. However, this approach was developed for horizontal bounding boxes and has not shown comparable performance when applied to rotated bounding boxes.

A common way to overcome this is to convert the rotated bounding boxes into 2D Gaussian distributions. From there, there are several methods one can employ to calculate the loss between the predicted and ground-truth bounding boxes. Two good options are the Kullback-Leibler divergence (a generalization of the squared distance) or the Gaussian Wasserstein distance (derived from the Wasserstein metric). Both of these options are promising and useful to have in your toolkit if you are interested in rotated bounding box object detection.
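A sketch of the conversion and the squared Gaussian Wasserstein distance (conventions assumed from the GWD literature, so treat this as illustrative rather than a reference implementation):

```python
import numpy as np

def sqrtm2(M):
    # closed-form square root of a 2x2 symmetric positive-definite matrix
    s = np.sqrt(np.linalg.det(M))
    t = np.sqrt(np.trace(M) + 2.0 * s)
    return (M + s * np.eye(2)) / t

def rbox_to_gaussian(cx, cy, w, h, theta):
    # oriented box (center, size, angle) -> 2D Gaussian (mean, covariance)
    mu = np.array([cx, cy])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.diag([(w / 2.0) ** 2, (h / 2.0) ** 2])  # axis-aligned covariance
    return mu, R @ S @ R.T                          # rotated into the box frame

def gwd2(mu1, s1, mu2, s2):
    # squared Gaussian Wasserstein distance between two 2D Gaussians
    r1 = sqrtm2(s1)
    cross = sqrtm2(r1 @ s2 @ r1)
    return np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * cross)

m1, s1 = rbox_to_gaussian(0, 0, 4, 2, 0.3)
m2, s2 = rbox_to_gaussian(1, 0, 4, 2, 0.3)
print(gwd2(m1, s1, m1, s1))  # identical boxes: ~0
print(gwd2(m1, s1, m2, s2))  # same box shifted by 1 in x: ~1
```

In practice the distance is wrapped in a nonlinearity (e.g. log) before being used as a regression loss.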

05/11/2023

We are excited to welcome David Chandler as our new Security Manager/FSO starting 5/15/23!

David Chandler has been a Security Manager, Industrial Security Analyst, and Facility Security Officer for the last 19 years with experience in a broad variety of security related disciplines in the Department of Defense and Intelligence Community domains. He has worked for a variety of companies ranging from very large to very small, and brings with him a diverse security background. He is a Veteran of the US Army and spent nine years as an airborne infantryman prior to his industrial security career. He is also currently looking into completing his Bachelor’s degree in Cyber Security.

He enjoys hiking in the mountains whenever possible with his Siberian husky Uhtred, and has been involved with softball and CrossFit outside of work.

05/10/2023

Contrastive learning (CL) and masked image modelling (MIM) are both methods of self-supervised learning; however, the two have very different properties. Researchers have delved into these specific differences in a recent paper (https://arxiv.org/abs/2305.0072), where they see that contrastive learning better captures longer-range dependencies and utilizes lower-frequency signals, making it superior at identifying shapes. MIM, by contrast, proves to be more texture-oriented. The stark contrast between the two shows the power they could possess if used together in a reasonable fashion. This is exactly what these researchers did when they showed that by simply adding the losses from CL and MIM you can exploit the pros of both methods. Although the approach they employed to combine these ideas is simple, it shows the distinct possibility of building better self-supervised models by exploiting both of these techniques rather than just one.
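A toy sketch of that combination (illustrative shapes and helper names, not the paper's code): just sum an InfoNCE-style contrastive loss and a masked-reconstruction loss.

```python
import torch
import torch.nn.functional as F

# Toy sketch of summing the two self-supervised losses.
torch.manual_seed(0)

def info_nce(z1, z2, temp=0.1):
    # contrastive loss: matched pairs within the batch are the positives
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temp
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

def mim_loss(pred, target, mask):
    # reconstruction error, counted only on the masked patches
    return (((pred - target) ** 2).mean(-1) * mask).sum() / mask.sum()

z1, z2 = torch.randn(8, 32), torch.randn(8, 32)             # two augmented views
pred, tgt = torch.randn(8, 16, 32), torch.randn(8, 16, 32)  # patch tokens
mask = (torch.rand(8, 16) < 0.75).float()                   # 75% of patches masked
total = info_nce(z1, z2) + mim_loss(pred, tgt, mask)
print(total.item())
```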

05/01/2023

SpectFormer is a new extension to the growing list of transformer-based architectures. The primary difference is that this network utilizes both spectral layers and multi-headed attention layers. The spectral layers take the image into the frequency domain and apply a learnable gating technique before returning to the spatial domain. They do this in a staged approach: as the data moves through the transformer blocks, fewer and fewer spectral layers are applied and more attention layers are present.
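A hedged sketch of such a spectral gating layer (our reading of the idea, not the official implementation): FFT the feature map, multiply by a learnable complex gate, and inverse-FFT back.

```python
import torch
import torch.nn as nn

class SpectralGating(nn.Module):
    # Hypothetical sketch: learnable complex gate over rfft2 frequencies.
    def __init__(self, h, w, dim):
        super().__init__()
        # initialized to 1 + 0j, i.e. the identity gate
        self.gate = nn.Parameter(torch.stack(
            [torch.ones(h, w // 2 + 1, dim),
             torch.zeros(h, w // 2 + 1, dim)], dim=-1))

    def forward(self, x):  # x: (B, H, W, C), real-valued
        f = torch.fft.rfft2(x, dim=(1, 2))        # to the frequency domain
        f = f * torch.view_as_complex(self.gate)  # learnable gating
        return torch.fft.irfft2(f, s=x.shape[1:3], dim=(1, 2))

layer = SpectralGating(8, 8, 16)
x = torch.randn(2, 8, 8, 16)
out = layer(x)
print(out.shape)  # same shape; at init the layer is an identity map
```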

The researchers compared networks which used all attention layers, all spectral layers, spectral before attention layers, or attention before spectral layers, and found that the networks with initial spectral layers performed the best on the ImageNet1k dataset.

For a full read, you can find the paper at https://arxiv.org/abs/2304.06446

04/17/2023

It is well understood that neural networks perform incredibly well when they have a large number of parameters, even to the point of overparameterization (more parameters than data). When a network becomes overparameterized, the expectation is that it can easily overfit the training data, which should lead to poor generalization. So, the question becomes: why do networks generalize so well even in this overparameterized condition?

Researchers at the University of Oxford have found that modern networks have a sort of intrinsic inductive bias that can counteract the growth of overly complex functions, a sort of intrinsic Occam's Razor.

These fascinating results are discussed in their paper, which can be found at: https://lnkd.in/ef8MdT6E


04/10/2023

There has been a lot of discussion on the value of two stage vs. single stage detectors, and transformers versus traditional CNN architectures. However, little work has been done to understand the model performance of these architectures when it comes to aerial imagery. Researchers at Mohamed bin Zayed University have looked into this question and found that transformers (DETR) perform best on large objects, while single-stage detectors (YOLOv5) perform best on small objects. Although just a single study, further understanding this can have a large impact on how ML practitioners choose which architecture will best suit their problem.

You can find this work at: https://arxiv.org/pdf/2211.15479.pdf

04/03/2023

We are excited to welcome Chris Randall as our new Machine Learning Engineer starting 4/10/23!

Although classically trained as an economist, Chris is a data scientist specializing in designing analytical solutions to various business problems using machine learning, data visualization, and data storytelling. Chris brings a decade's worth of experience working at the intersection of hardcore data science and traditional business analytics to interrogate data and find the story hidden within. Throughout his career, he has consulted for several Fortune 500 companies, including retailers, CPGs, and restaurant chains.

Chris holds a bachelor’s degree in business economics as well as a master’s degree in applied economics from the University of Cincinnati, where he teaches students enrolled in the same program today. This background enables him to seek and implement novel solutions to complex business problems.

In his spare time, he enjoys hiking, traveling, ice hockey, running, and spending time with his family and son.

03/29/2023

We are excited to welcome Steven Baker as our new GEOINT SAR Analyst starting 4/10/23!

Steven is originally from northern Kentucky and after 10 years on active duty, returned to the area to be closer to family. In his free time, he enjoys playing the bagpipes with a local band, cycling, and spending time with his family.

Steven is an accomplished geospatial-intelligence imagery technician in the Army Reserves and is joining Etegent after completing a tour of active duty. He's an open-minded, hardworking team player and looks forward to getting started.

Address

5050 Section Avenue, Ste 110
Norwood, OH
45212

Opening Hours

Monday 9am - 5pm
Tuesday 9am - 5pm
Wednesday 9am - 5pm
Thursday 9am - 5pm
Friday 9am - 5pm

Telephone

(513) 631-0579


About the Company

Etegent Technologies, Ltd., is a high technology, R&D focused company conducting state-of-the-art research in a range of areas, including (but not limited to)


  • Automatic target recognition utilizing radar, LADAR, image, vibrometry and other data types

  • Health monitoring of turbine engines and other assets

  • Nondestructive inspection data management and mining