Nowadays, social networks play a crucial role in everyday human life and are no longer associated purely with leisure. In fact, instant communication with friends and colleagues has become an essential component of our daily interaction, giving rise to multiple new types of social networks. By participating in such networks, individuals generate a multitude of data points that describe their activities from different perspectives and can be further used for applications such as personalized recommendation or user profiling. However, the impact of different social media networks on machine learning model performance has not yet been studied comprehensively. In particular, the literature on modeling multi-modal data from multiple social networks is relatively sparse, which inspired us to take a deeper dive into the topic in this preliminary study. Specifically, in this work, we study the performance of different machine learning models trained on multi-modal data from different social networks. Our initial experimental results reveal that the choice of social network impacts performance and that proper selection of the data source is crucial.
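To make the comparison concrete, the minimal Python sketch below shows one way to evaluate the same classifier on multi-modal features drawn from different social networks; the loader, network names, feature dimensions, and model are hypothetical placeholders and do not reflect the data or models used in the study.

```python
# Minimal sketch (not the study's actual pipeline): train the same classifier on
# multi-modal features from each social network separately and compare scores.
# Feature arrays and network names here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def load_features(network: str) -> tuple[np.ndarray, np.ndarray]:
    """Placeholder loader: in practice this would return concatenated
    text/image/location features and profile labels for the given network."""
    X = rng.normal(size=(500, 64))    # 500 users, 64-dim multi-modal features
    y = rng.integers(0, 2, size=500)  # binary profile attribute
    return X, y

for network in ["network_A", "network_B", "network_C"]:  # hypothetical sources
    X, y = load_features(network)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(f"{network}: mean accuracy = {scores.mean():.3f}")
```

Holding the model fixed and varying only the data source isolates the effect of social network choice on downstream performance.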
User profile learning, such as mobility and demographic profile learning, is of great importance to various applications. Meanwhile, the rapid growth of multiple social platforms makes it possible to perform comprehensive user profile learning from different views. However, research efforts on user profile learning from multiple data sources are still relatively sparse, and no large-scale dataset has been released for user profile learning. In this study, we contribute such a benchmark and perform an initial study on user mobility and demographic profile learning. First, we constructed and released a large-scale multi-source multi-modal dataset covering three geographical areas. We then applied our proposed ensemble model to this dataset to learn user profiles. Based on our experimental results, we observed that multiple data sources mutually complement each other and that their appropriate fusion boosts user profiling performance.
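As a rough illustration of fusing multiple data sources, the sketch below averages the predicted probabilities of per-source classifiers (simple late fusion); the source names, feature shapes, and base model are assumptions for illustration and do not describe the proposed ensemble model's actual design.

```python
# Minimal late-fusion sketch, assuming one feature matrix per data source for the
# same set of users. Source names and dimensions are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_users = 1000
y = rng.integers(0, 2, size=n_users)  # e.g. a binary demographic attribute

# Hypothetical per-source feature matrices.
sources = {
    "source_1": rng.normal(size=(n_users, 32)),
    "source_2": rng.normal(size=(n_users, 48)),
    "source_3": rng.normal(size=(n_users, 16)),
}

idx_train, idx_test = train_test_split(
    np.arange(n_users), test_size=0.2, random_state=0
)

# Train one base model per source, then average their predicted probabilities.
probas = []
for name, X in sources.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[idx_train], y[idx_train])
    probas.append(clf.predict_proba(X[idx_test])[:, 1])

fused = np.mean(probas, axis=0)        # simple averaging fusion
pred = (fused >= 0.5).astype(int)
print("fused accuracy:", accuracy_score(y[idx_test], pred))
```

Comparing the fused score against each per-source score is one straightforward way to check whether the sources complement each other.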
Wellness is a widely popular concept commonly applied to fitness and self-help products or services. Inferring personal wellness-related attributes, such as Body Mass Index (BMI) category or disease tendencies, as well as understanding global dependencies between wellness attributes and users' behavior, is of crucial importance to various applications in the personal and public wellness domains. At the same time, the emergence of social media platforms and wearable sensors makes it feasible to perform wellness profiling of users from multiple perspectives. However, research efforts on wellness profiling and the integration of social media and sensor data are relatively sparse, and this study represents one of the first attempts in this direction. Specifically, we infer personal wellness attributes with our proposed multi-source multi-task wellness profile learning framework, "WellMTL", which can handle data incompleteness and infer wellness attributes from sensor and social media data simultaneously. To gain insights into the data at a global level, we also examine correlations between first-order data representations and personal wellness attributes. Our experimental results show that the integration of sensor data and multiple social media sources can substantially boost the performance of individual wellness profiling.
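The following sketch hints at how a multi-task model with masked losses can handle missing labels across several wellness attributes; it is a simplified stand-in, assuming a shared encoder over fused sensor and social media features, and is not the actual WellMTL framework. All dimensions and names are hypothetical.

```python
# Minimal multi-task sketch with masked losses. Illustrates ignoring missing
# labels per attribute; not the WellMTL architecture.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim: int, hidden: int, n_tasks: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        h = self.encoder(x)
        return torch.cat([head(h) for head in self.heads], dim=1)  # (batch, n_tasks)

# Toy batch: 128 users, 80-dim fused sensor + social media features,
# 3 binary wellness attributes (e.g. BMI category indicators).
x = torch.randn(128, 80)
y = torch.randint(0, 2, (128, 3)).float()
mask = (torch.rand(128, 3) > 0.3).float()  # 1 = label observed, 0 = missing

model = MultiTaskNet(in_dim=80, hidden=64, n_tasks=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss(reduction="none")

for step in range(5):
    optimizer.zero_grad()
    logits = model(x)
    # Average the per-element loss only over observed labels.
    loss = (criterion(logits, y) * mask).sum() / mask.sum()
    loss.backward()
    optimizer.step()
```

Masking the loss in this way lets each training example contribute to whichever attribute heads have labels, which is one common way to cope with incomplete multi-source data.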
While there’s more focus than ever on providing a good user interface/user experience in new tech tools, that doesn’t just mean adding lots of “wow” functionality. To ensure the best UX, it’s important for hardware and software developers to also pay attention to ergonomic design. A focus on ergonomics ensures that the tool is adapted to foster comfort and convenience for the human user—that is, the human should inform how the tool works, not the other way around.
While the concept is simple enough, implementing it can be challenging, as designers must take account of factors ranging from device sizes to color palettes. Here, 16 members of Forbes Technology Council share tips to help developers ensure they’ve incorporated good ergonomic design principles in their products.
A digital twin is precisely what its name suggests: A digital copy of a physical object or system—even a human being. It may be a simple concept, but the potential applications are anything but. Through the ongoing collection and exchange of data, a digital twin can simulate and even predict the behaviors and reactions of its physical twin in a variety of conditions, providing invaluable insights to industries ranging from manufacturing to healthcare.
Digital twin technology allows businesses and organizations to test products and processes, study and predict how real-world conditions can affect physical objects and beings, and make well-informed, big-impact decisions with minimized financial and human safety risks. Below, 16 members of Forbes Technology Council share some of the fascinating ways industries and organizations are leveraging digital twin technology.
Anyone with an eye on the business world knows that tech professionals across industries saw their roles and responsibilities take a huge leap forward with the onset of widespread remote and hybrid work in the wake of the Covid pandemic. Even before that, tech leaders and their teams—both those working for tech-focused companies and those in other industries—had already been having an increasingly large impact on the ways their companies operated.
And while changing workplaces and markets are adding new challenges, there are also “evergreen,” ongoing issues that tech leaders will likely always need to wrestle with, even as they help fellow leaders learn to lean on the unique perspective a tech expert can bring to overall business strategy. Below, 16 members of Forbes Technology Council share the new (and ever-present) priorities they’re dealing with and why they’re so important.