Nowadays, social networks play a crucial role in everyday human life and are no longer associated purely with leisure. In fact, instant communication with friends and colleagues has become an essential component of our daily interaction, giving rise to multiple new types of social networks. By participating in such networks, individuals generate a multitude of data points that describe their activities from different perspectives and can be further used for applications such as personalized recommendation or user profiling. However, the impact of different social media networks on machine learning model performance has not yet been studied comprehensively. In particular, the literature on modeling multi-modal data from multiple social networks is relatively sparse, which inspired us to take a deeper dive into the topic in this preliminary study. Specifically, in this work, we study the performance of different machine learning models trained on multi-modal data from different social networks. Our initial experimental results reveal that the choice of social network impacts performance and that proper selection of the data source is crucial.
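The per-source comparison described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the source names, feature dimensions, and data are all synthetic placeholders standing in for real multi-modal features extracted from each network.

```python
# Sketch: compare one classifier's performance across hypothetical social
# media sources, plus an early-fusion baseline that concatenates them all.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 200
labels = rng.integers(0, 2, size=n_users)

# Hypothetical multi-modal feature blocks, one per social network source.
sources = {
    "network_a_text": rng.normal(size=(n_users, 50)),
    "network_b_image": rng.normal(size=(n_users, 30)),
    "network_c_location": rng.normal(size=(n_users, 10)),
}

for name, features in sources.items():
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            features, labels, cv=5).mean()
    print(f"{name}: accuracy {score:.2f}")

# Early-fusion baseline: concatenate all sources into one feature matrix.
fused = np.hstack(list(sources.values()))
fused_score = cross_val_score(LogisticRegression(max_iter=1000),
                              fused, labels, cv=5).mean()
print(f"all sources fused: accuracy {fused_score:.2f}")
```

Repeating such a loop over source subsets is one simple way to measure how much each network, alone or in combination, contributes to model performance.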
Wellness is a widely popular concept that is commonly applied to fitness and self-help products or services. Inferring personal wellness-related attributes, such as Body Mass Index (BMI) category or disease tendencies, as well as understanding global dependencies between wellness attributes and users' behavior, is of crucial importance to various applications in the personal and public wellness domains. At the same time, the emergence of social media platforms and wearable sensors makes it feasible to perform wellness profiling for users from multiple perspectives. However, research efforts on wellness profiling and on the integration of social media and sensor data are relatively sparse, and this study represents one of the first attempts in this direction. Specifically, we infer personal wellness attributes using our proposed multi-source multi-task wellness profile learning framework, "WellMTL", which can handle data incompleteness and infer wellness attributes from sensor and social media data simultaneously. To gain insights into the data at a global level, we also examine correlations between first-order data representations and personal wellness attributes. Our experimental results show that integrating sensor data and multiple social media sources can substantially boost the performance of individual wellness profiling.
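The two ideas highlighted above, handling incomplete multi-source data and predicting several wellness attributes at once, can be illustrated with a crude stand-in. This sketch is not the WellMTL framework itself: it substitutes mean imputation and independent per-task classifiers for the actual method, and all features, labels, and missingness are synthetic.

```python
# Sketch: impute missing multi-source features, then predict several
# hypothetical wellness attributes from one shared input representation.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_users = 300
# Concatenated sensor + social media features with simulated incompleteness.
X = rng.normal(size=(n_users, 40))
X[rng.random(X.shape) < 0.2] = np.nan  # 20% of entries missing

# Two hypothetical binary wellness attributes (e.g. BMI category, activity level).
Y = rng.integers(0, 2, size=(n_users, 2))

model = make_pipeline(
    SimpleImputer(strategy="mean"),             # handle data incompleteness
    MultiOutputClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(X, Y)
preds = model.predict(X)
print(preds.shape)  # one prediction per user per attribute
```

A genuine multi-task model would additionally share learned representations across the attribute predictors so that the tasks regularize each other, which this per-task baseline does not do.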
Venue category recommendation is an essential application for the tourism and advertising industries, as it can suggest attractive localities in close proximity to a user's current location. Considering that many adults use more than three social networks simultaneously, it is reasonable to leverage this rapidly growing multi-source social media data to boost venue recommendation performance. Another way to achieve better recommendation results is to utilize group knowledge, which can diversify the recommendation output. Taking these two aspects into account, we introduce a novel cross-network collaborative recommendation framework, C3R, which utilizes both individual and group knowledge and is trained on data from multiple social media sources. Group knowledge is derived via a new cross-source user community detection approach, which exploits both inter-source relationships and the ability of sources to complement each other. To fully utilize multi-source multi-view data, we process user-generated content with state-of-the-art text, image, and location processing techniques. Our experimental results demonstrate the superiority of our multi-source framework over state-of-the-art baselines and different data source combinations. In addition, we propose a new approach for automatically constructing an inter-network relationship graph from the data, which eliminates the need for pre-defined domain knowledge.
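One simple way to picture "group knowledge from cross-source community detection" is sketched below. This is an illustration of the general idea, not the C3R framework: here an edge is kept only if it appears in at least two networks, communities are the connected components of the resulting graph, and a user is recommended the venue category most popular among community peers. All users, edges, and check-ins are synthetic.

```python
# Sketch: cross-source community detection + group-based venue recommendation.
from collections import Counter, defaultdict

# Hypothetical friendship edges observed on each social network.
source_edges = {
    "network_a": [("u1", "u2"), ("u2", "u3"), ("u4", "u5"), ("u3", "u4")],
    "network_b": [("u1", "u2"), ("u2", "u3"), ("u4", "u5"), ("u5", "u6")],
    "network_c": [("u4", "u5"), ("u5", "u6")],
}

# Cross-source aggregation: an edge is "strong" if seen in >= 2 sources.
edge_counts = Counter(tuple(sorted(e)) for edges in source_edges.values()
                      for e in edges)
strong = [e for e, n in edge_counts.items() if n >= 2]

# Communities = connected components of the strong-edge graph (union-find).
parent = {}
def find(u):
    parent.setdefault(u, u)
    while parent[u] != u:
        parent[u] = parent[parent[u]]  # path halving
        u = parent[u]
    return u
def union(u, v):
    parent[find(u)] = find(v)
for u, v in strong:
    union(u, v)

communities = defaultdict(set)
for u in parent:
    communities[find(u)].add(u)

# Hypothetical venue-category check-ins per user.
checkins = {
    "u1": ["cafe"], "u2": ["cafe", "museum"], "u3": ["cafe"],
    "u4": ["gym"], "u5": ["gym", "park"], "u6": ["gym"],
}

def recommend(user):
    # Group knowledge: most frequent category among the user's community peers.
    group = next(c for c in communities.values() if user in c)
    counts = Counter(cat for peer in group if peer != user
                     for cat in checkins.get(peer, []))
    return counts.most_common(1)[0][0]

print(recommend("u1"))  # "cafe"
print(recommend("u6"))  # "gym"
```

Note how the weak tie ("u3", "u4") appears in only one source and is dropped, splitting the users into two communities; this is a toy analogue of exploiting inter-source agreement when building the community structure.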
The technological revolution marked by the shift from using a mouse and keyboard to touch screens has transformed our interaction with devices. In this rapidly evolving landscape, humans communicating with familiar applications using natural language has emerged as the most intuitive and effortless solution to bridge the gap between humans and machines, making life easier for customers and expanding the market. OpenAI, a trailblazer in AI development and advocacy, has ventured into the realm of venture capitalism through its subsidiary, the OpenAI Startup Fund, dispersing substantial investments to four promising DeepTech startups: Descript, Harvey, Mem, and Speak. Three of these companies harness the power of large language models (LLMs) to revolutionize human-machine interaction, raising the question: Why were these particular startups selected out of thousands of companies worldwide?
As technology continues to evolve and become more integrated into our daily lives—and as the internet and social media have opened up new ways for consumers to publicly voice their opinions on products—the user experience has become a critical factor in the success of any tech product. Companies are now focusing on providing seamless and intuitive experiences that cater to their users’ needs and preferences.
This growing emphasis on UX has led to new trends expected to become table stakes in the next five years. Below, 16 Forbes Technology Council members explore some of the upcoming UX trends that will be crucial for the success of tech products and why they will be so important.