Friday, September 29, 2023

ChatGPT discussion about neural net frameworks and training

Robot Framework Selection Guide September 29, 2023

mCasualJacques
explain pytorch, tensorFlow, tensorFlow Lite and Lua in relation to the ESP32s and nvidia nano 2GB. i'm trying to choose which framework to use for my robot. the nano is projected to be in control of locomotion and equilibrium, the esp32 is intended for voice-to-text and maybe TTS

Choosing the right framework for your robot project depends on various factors, including the capabilities of the hardware you're using (ESP32 and NVIDIA Nano 2GB) and your specific requirements. Let's discuss each of the frameworks you mentioned in relation to these platforms:

PyTorch and TensorFlow: Both are popular deep learning frameworks with extensive support for training and deploying neural networks. PyTorch is known for its dynamic computational graph, which makes it flexible and intuitive for research and prototyping; it's widely used in the research community. TensorFlow uses a static computational graph, which can lead to better performance and optimization for production-level applications. Both support models for tasks like image recognition and natural language processing.
- ESP32: Running PyTorch or TensorFlow directly on an ESP32 is impractical due to its limited computational resources; these frameworks are better suited to more powerful devices like the NVIDIA Nano.
- NVIDIA Nano 2GB: Both PyTorch and TensorFlow can run on the Nano, which offers GPU support for accelerated deep learning. TensorFlow in particular is well-optimized for NVIDIA GPUs and is commonly used where GPU acceleration is required.

TensorFlow Lite: A lightweight version of TensorFlow designed for edge devices such as microcontrollers (e.g., ESP32) and single-board computers (e.g., Raspberry Pi). It provides tools to convert pre-trained TensorFlow models into formats suitable for deployment on resource-constrained devices.
- ESP32: TensorFlow Lite is a good fit for the ESP32, especially if you plan to run lightweight models for voice-to-text and text-to-speech (TTS).
- NVIDIA Nano 2GB: TensorFlow Lite can run on the Nano, but the Nano's higher computational capabilities make it better suited for full TensorFlow or PyTorch if you have complex deep learning tasks.

Lua: A lightweight scripting language often used for embedded systems and game development. It is not a deep learning framework like PyTorch or TensorFlow, but it can be used for general-purpose programming and scripting.
- ESP32: Lua can run on the ESP32 and may suit simple tasks and scripting, but it's not the best choice for complex machine learning tasks.
- NVIDIA Nano 2GB: It's possible to run Lua on the Nano, but it's generally not the primary choice for deep learning on this platform; TensorFlow or PyTorch would be more appropriate.

In summary: for your robot project, consider TensorFlow Lite on the ESP32 for voice-to-text and possibly TTS, since it's optimized for edge devices. For locomotion and equilibrium control on the NVIDIA Nano 2GB, use the full version of PyTorch or TensorFlow, depending on your requirements and familiarity with the frameworks.
mCasualJacques
can the word corrugated be used for other things than corrugated cardboard?

Yes, the word "corrugated" can be used to describe things other than corrugated cardboard. The term "corrugated" refers to a specific pattern or texture characterized by a series of parallel folds, ridges, or grooves. Here are some examples of things that can be described as corrugated: Corrugated Metal: Metal sheets or panels can be corrugated to increase their strength and rigidity. Corrugated metal is commonly used for roofing, siding, and in industrial applications. Corrugated Plastic: Similar to corrugated cardboard, corrugated plastic sheets have a wavy pattern and are used for various purposes, such as signage, packaging, and constructing lightweight structures. Corrugated Hoses: In engineering and plumbing, corrugated hoses are flexible hoses with a corrugated structure, often used for conveying liquids or gases in a flexible manner. Corrugated Culverts: In civil engineering and construction, corrugated culverts are used as pipes with a corrugated design to facilitate water drainage or to channel fluids under roads and bridges. Corrugated Fiberboard: Apart from cardboard, corrugated fiberboard can be used for packaging, creating partitions, and as a structural material in various applications. Corrugated Glass: In architectural contexts, corrugated glass panels or sheets can be used for decorative purposes, allowing for unique patterns of light and shadow. Corrugated Landscapes: In a broader sense, you can use "corrugated" metaphorically to describe landscapes, surfaces, or textures that resemble the undulating pattern of corrugation. For example, you might describe rolling hills as having a corrugated appearance. In general, "corrugated" is a descriptive term that can be applied to any surface or material that exhibits a repetitive, wavy, or folded pattern, not limited solely to corrugated cardboard.
mCasualJacques
in "A new Princess is borne" borne should be born?

Yes, the correct word in that context should be "born." The sentence should read, "A new Princess is born." "Borne" is the past participle of the verb "bear," but in this context, you are referring to the birth of a princess, so "born" is the appropriate form.
mCasualJacques
can tensorFlow Lite run on a PC and use the GPUs or only on emulators like for example an ESP32 emulator? on the PC would it only work on Linux and not on Windows 10/11?

TensorFlow Lite primarily targets edge devices such as microcontrollers, mobile devices, and single-board computers. It is not commonly used on desktop or laptop PCs with powerful GPUs; the full version of TensorFlow is typically used there. However, you can run TensorFlow Lite on a PC for development or testing: it provides a Python interpreter and API, so you can run TensorFlow Lite models on your PC, just without using the GPU efficiently. Key points:

- Performance on PC: TensorFlow Lite is optimized for edge devices, not for high-performance PC GPUs. If you want GPU acceleration on a PC, use the full version of TensorFlow rather than TensorFlow Lite.
- Operating system compatibility: TensorFlow Lite runs on various operating systems, including Linux, Windows, and macOS, so it works on Windows 10 and 11.
- Testing without hardware: you can also exercise TensorFlow Lite for Microcontrollers code on a host machine, or in an emulator of the target chip, before deploying to hardware such as the ESP32; this is useful for development but does not exercise the GPU.

In summary, TensorFlow Lite can be used on a PC for development and testing, including on Windows 10 and 11. But to leverage the full power of PC GPUs, especially for training or running complex deep learning models, use the full version of TensorFlow.
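For example, running a converted model on a PC with the TensorFlow Lite Python interpreter looks roughly like this (a minimal sketch; the model file name and the dummy input are placeholders):

import numpy as np
import tensorflow as tf

# load a converted model; "model.tflite" is a placeholder file name
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# feed a dummy input matching the model's expected shape and dtype
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]))

This runs entirely on the CPU, consistent with the points above.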
mCasualJacques
so on the PC it all runs on the CPU. i suppose an i5 or i7 cpu can achieve the performance of recent versions of the ESP-32 which have added tensor support (but not cuda cores or tensor cores)

Yes, on a PC, when running TensorFlow Lite without GPU acceleration, it will primarily use the CPU for inference and computations. Modern Intel Core i5 and i7 CPUs are quite powerful and can match or exceed recent versions of the ESP-32 for many inference tasks.

The ESP-32, while capable and energy-efficient for embedded systems, has limited computational power compared to a desktop or laptop CPU. The key difference is that the ESP-32 is designed for low-power embedded applications, while an i5 or i7 is designed for general-purpose computing and can handle a much wider range of tasks.

For many machine learning inference tasks, especially ones that are not extremely computationally intensive, a modern PC CPU provides good performance. Keep in mind, though, that for very large models or for training, the acceleration provided by high-end GPUs can significantly outperform even powerful CPUs; in such cases, the full version of TensorFlow with GPU support on a PC is more appropriate.
mCasualJacques
i suspect that training of tensorFlow-lite models is done on the PC with TensorFlow. for pre-trained models like you

You are correct. Training machine learning models, including TensorFlow Lite models, is typically done on more powerful machines like PCs or servers using the full version of TensorFlow or a similar deep learning framework. Once a model is trained and optimized, it is converted into a format suitable for deployment on edge devices, such as microcontrollers or mobile devices, using TensorFlow Lite.

For pre-trained models, or models developed by machine learning researchers, the training usually happens on high-performance hardware, often equipped with powerful GPUs or specialized accelerators like TPUs (Tensor Processing Units). These models are then made available to developers for inference on edge devices.

So if you are using a pre-trained TensorFlow Lite model, you don't need to worry about training it yourself. You can focus on deployment and inference, which can be done on a wide range of devices, including microcontrollers like the ESP-32, once the model is properly converted and optimized for them.
mCasualJacques
but since the tensorFlow-Lite model is intended to run (that's what you called inference?) on edge devices, the training can be done on consumer grade cards like the RTX2070

Absolutely, you can train TensorFlow Lite models on consumer-grade GPUs like the RTX 2070. Training deep learning models, including those intended for edge devices like microcontrollers or mobile devices, is typically done on workstations or servers equipped with GPUs. A common workflow:

1. Training: a powerful machine, often with one or more high-end GPUs, trains the model on a large dataset, adjusting its parameters to make predictions more accurately. Training is computationally intensive and time-consuming, and GPUs significantly speed it up.
2. Model export: after training, you export the trained model in a format suitable for deployment on edge devices; this exported model is what you use with TensorFlow Lite.
3. Deployment and inference: the exported model is deployed on edge devices, such as microcontrollers, where it performs inference, i.e., making predictions from the model's learned parameters. This is where TensorFlow Lite comes in, as it's optimized for efficient inference on resource-constrained devices.

A consumer-grade GPU like the RTX 2070 offers a good balance of performance and cost for many deep learning training tasks. Once trained and converted to TensorFlow Lite format, the model can run efficiently on edge devices like the ESP-32 for real-time inference, without requiring training-level computational resources.
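As a concrete illustration of steps 1 and 2, here is a minimal train-and-convert sketch in Python (the data, layer sizes, and file name are placeholders, not the actual setup discussed here):

import numpy as np
import tensorflow as tf

# toy training data; in practice this would come from simulation or sensor logs
x = np.random.rand(1000, 6).astype(np.float32)   # e.g., 6 joint angles
y = np.random.rand(1000, 4).astype(np.float32)   # e.g., 4 motor commands

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, verbose=0)  # uses the GPU if TensorFlow sees one

# convert the trained model for deployment on an edge device
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/speed optimization
with open("model.tflite", "wb") as f:
    f.write(converter.convert())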
mCasualJacques
a few years ago i ported tinyNN which lets you define your neural network, train and use it. it was written by 1 guy, not a company. have you heard about it?

As of my last knowledge update in September 2021, I do not have specific information about a neural network library called "tinyNN." It may be a lesser-known or specialized library developed by an individual or a small community that hasn't gained widespread recognition in the machine learning and deep learning community.

The field is quite dynamic, with many libraries and frameworks available, both open-source and proprietary. While popular frameworks like TensorFlow, PyTorch, and Keras dominate the landscape, there are numerous smaller, specialized libraries and tools that cater to specific needs or experimental projects.

If "tinyNN" is useful for your requirements, you can certainly keep using and exploring it. Just keep in mind that the ecosystem evolves continually, so there may have been developments since my last update; checking for updates or community discussions related to "tinyNN" would show whether it has gained traction or undergone significant changes.
mCasualJacques
it was possibly posted on github

If "tinyNN" was posted on GitHub, it's possible that it gained some visibility within the open-source community, especially among developers interested in neural networks and deep learning. GitHub is a popular platform for hosting and sharing code, and many machine learning projects and libraries are hosted there. To find more information about "tinyNN" and its current status, you can perform a search on GitHub using relevant keywords like "tinyNN," "neural network," or the author's username, if available. This should help you locate the repository if it's still active and if there have been any recent updates or contributions. You can also check for documentation, discussions, and issues related to the project on the GitHub repository page. This can provide insights into its popularity and ongoing development efforts.
mCasualJacques
https://sites.google.com/site/mcasualsdazscripts9/mcjnn a shallow Neural Net Kit for Daz Studio

Thank you for sharing the link to "mcjnn." It appears to be a shallow neural network kit designed specifically for Daz Studio, a 3D modeling and rendering software. This kit likely allows users to integrate shallow neural networks or machine learning models into Daz Studio for various creative and computational purposes. While this kit may not be as widely known as some other machine learning frameworks or libraries, it's an interesting and specialized tool for those working with Daz Studio and interested in exploring the integration of neural networks into their projects. Users interested in this kit can visit the provided link to the author's page for more details, documentation, and downloads related to "mcjnn" for Daz Studio. Please note that since my knowledge is based on information available up to September 2021, I may not have specific details about the kit's latest updates or features.
mCasualJacques
that's my porting of this c++ app for daz studio. it's a PC executable, and i have a daz script that communicates with it using stdin/stdout and files on disk

Thank you for sharing more information about your project. Porting a C++ application for Daz Studio and creating a Daz script to communicate with it using stdin/stdout and file interactions is an interesting approach to integrating machine learning or neural network functionality into a 3D modeling and rendering environment. This method allows you to leverage the power of your C++ application for tasks such as neural network inference or other computational processes, while Daz Studio serves as the interface for users to interact with your 3D models and scenes. If you have any specific questions or need assistance with any aspects of your project, whether it's related to improving the communication between your Daz script and the C++ application, optimizing performance, or expanding the functionality of your neural network integration, please feel free to ask. I'm here to help!
mCasualJacques
with mcjNN one could use the rotations and positions of bones and figures in the daz scene as input data for the NN for training and running

Using the rotations and positions of bones and figures in a Daz Studio scene as input data for a neural network is an intriguing application, and it can lead to various creative possibilities:

- Animation and pose prediction: train a network to predict future poses or animations of figures based on their current positions and rotations, useful for generating smooth, realistic animation.
- Character control: build control systems where user input or other parameters influence bone positions and rotations in real time, creating interactive, dynamic character animation.
- Procedural animation: use networks to generate animation from input conditions, producing dynamic and unique character movements.
- Character behavior: train the network to make characters react to their environment or to other elements in the scene, so they appear more lifelike and responsive.
- Automated animation: automate the animation of characters and objects, particularly useful for repetitive or complex animation.
- Inverse kinematics: use networks to solve inverse kinematics problems, determining bone rotations and positions that achieve a specific end-effector goal.
- Realistic physics: combine networks with physics simulation for more natural motion of characters and objects.

Using bone rotations and positions as network inputs has the potential to enhance the realism, interactivity, and automation of your 3D scenes, and can open up a wide range of creative possibilities for your Daz Studio projects; a sketch of the pose-prediction idea follows below.
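One way to set this up is to flatten each animation frame's bone rotations into a vector and train a network to predict the next frame. Everything in this sketch (frame count, bone count, the random stand-in data) is hypothetical:

import numpy as np
import tensorflow as tf

# stand-in data: rotations of N bones over T frames, one flat vector per frame,
# as could be exported from a Daz Studio scene
T, N = 500, 20
frames = np.random.rand(T, N * 3).astype(np.float32)

# supervised pairs: predict the next frame's pose from the current one
x_train, y_train = frames[:-1], frames[1:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N * 3,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(N * 3),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=10, verbose=0)

next_pose = model.predict(frames[-1:])  # one-step pose prediction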
mCasualJacques
i also made the bullet physics library available for daz studio. I project that this will be used to train my real-life physical robot quickly, as i mentioned before, for locomotion and keeping balance, with some standard PID methods and a neural network tuning on the nVidia Nano 2GB

It sounds like you're working on an exciting and comprehensive project that combines Daz Studio, Bullet Physics, neural networks, and hardware like the NVIDIA Nano 2GB, all with the goal of a physical robot capable of locomotion and balance control. Some key points as you proceed:

- Simulation with Bullet Physics: simulating in Daz Studio with Bullet Physics lets you train and fine-tune locomotion and balance algorithms before deploying them to the physical robot.
- Neural network training: training on simulated data lets you experiment with network architectures and training techniques to optimize the robot's control algorithms.
- Hardware integration: you'll need software interfaces and communication protocols for real-time control and data exchange between the NVIDIA Nano 2GB and the robot.
- PID control: PID (Proportional-Integral-Derivative) controllers can provide stability and robustness in controlling the robot's movements (a minimal sketch follows below).
- Sensor integration: the sensors the robot uses to observe its environment and internal state will play a vital role in balance and locomotion.
- Iterative development: building a robot that walks and balances is complex and usually takes multiple iterations and refinements.
- Safety: implement fail-safe mechanisms and test the robot in controlled environments.
- Documentation and collaboration: document your work, and consider collaborating with people experienced in robotics, control systems, and machine learning.

Robotics projects are challenging but incredibly rewarding; each technical hurdle is an opportunity to learn and improve the robot's capabilities. Good luck with your project!
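Since PID control comes up here and in the plan above, this is what a minimal discrete PID loop looks like (a sketch only; the gains and the hip-pitch example are illustrative values, not tuned ones):

class PID:
    """Minimal discrete PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # I term accumulates error
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# example: hold the hip pitch at 0 degrees, updating at 100 Hz
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
measured_pitch = 3.0  # would come from the accelerometer/IMU
correction = pid.update(setpoint=0.0, measurement=measured_pitch)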
mCasualJacques
i got all the electronics, programming and robotics needed for that, though my mathematics could use improvement, for things like the Jacobian ... something which i think is used for gradient descent or inverse kinematics or physics simulation

It's great to hear that you have a strong background in electronics, programming, and robotics; these skills are highly valuable for robotics projects. As you mentioned, mathematics, including concepts like Jacobians, is an essential component of robotics and control systems. Some key points about Jacobians and their applications:

- Jacobians in robotics: Jacobian matrices represent the relationship between joint velocities and end-effector velocities. They play a crucial role in inverse kinematics, which determines the joint angles needed to reach a desired end-effector position and orientation.
- Gradient descent: Jacobians also appear in gradient-descent-based optimization. In inverse kinematics, for example, you can use gradient descent to iteratively adjust joint angles toward a desired pose; the Jacobian maps changes in the end effector's position to changes in joint angles.
- Physics simulation: in rigid-body and multibody simulation, Jacobians help compute the forces and torques needed to maintain balance and achieve desired motions.
- Improving the mathematics: linear algebra, calculus, and numerical methods are the foundational areas for understanding Jacobians and related concepts.
- Resources: courses on platforms like Coursera, edX, and Khan Academy, or textbooks on linear algebra and calculus, are good starting points.
- Practical application: robotics projects give you opportunities to apply and deepen this mathematics in context, which is a valuable part of the learning process.

Don't be discouraged by the mathematical side of robotics; it's challenging but intellectually rewarding, and proficiency comes with practice. Collaboration with people who have expertise in robotics and mathematics can also provide valuable guidance. A sketch of the Jacobian-based inverse-kinematics idea follows below.
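To make the inverse-kinematics use concrete, here is a small numerical-Jacobian sketch for a planar 2-link arm (the link lengths, step size, and target are arbitrary illustration values):

import numpy as np

def fk(theta, l1=1.0, l2=1.0):
    # forward kinematics: joint angles -> end-effector position
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def jacobian(theta, eps=1e-6):
    # numerical Jacobian: how the end effector moves per small joint change
    J = np.zeros((2, 2))
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        J[:, i] = (fk(theta + d) - fk(theta - d)) / (2 * eps)
    return J

# gradient-descent-style inverse kinematics toward a target point
theta = np.array([0.3, 0.3])
target = np.array([1.2, 0.8])
for _ in range(200):
    error = target - fk(theta)
    theta += 0.1 * jacobian(theta).T @ error  # Jacobian-transpose step

print(fk(theta))  # should land close to the target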
mCasualJacques
i got the algebra and calculus needed . so i'll locate my older brother's numerical analysis course manual

That's a great plan! Numerical analysis is a valuable field of mathematics, particularly in the context of robotics, simulations, and scientific computing. Revisiting your older brother's numerical analysis course manual can be a valuable resource for deepening your understanding of numerical methods and their practical applications. Numerical analysis often covers topics like numerical integration, root-finding algorithms, interpolation, and methods for solving differential equations, all of which can be highly relevant to robotics and physics simulations. It's an excellent way to gain the mathematical tools necessary to tackle complex problems in these domains. As you study and apply numerical methods in your robotics projects, you'll likely find that a solid foundation in numerical analysis can significantly enhance your ability to develop accurate and efficient algorithms. If you have any specific questions or encounter challenges along the way, feel free to ask for assistance or clarification. Best of luck with your robotics endeavors!
mCasualJacques
on the other hand, if i don't build a physically accurate model of my robot in daz studio (except for the location of the center of gravity) and instead use data coming from the robot's sensors, it makes training slower, but the model is accurate since it's the real-life model. This also has the advantage of not having to mathematically model motors and gear trains

Training directly on data from the robot's own sensors, rather than building a fully accurate physical model of it in Daz Studio, is a practical and effective approach, especially if you don't want to invest significant effort in modeling motors and gear trains mathematically. This approach has several advantages:

- Ground truth by construction: the data describes the behavior of the actual robot, which matters for tasks that require precise control or interaction with the real world.
- Sensor integration: you exercise and fine-tune your sensor processing on exactly the data the deployed system will see.
- Reduced mathematical complexity: by not modeling motors and gear trains, you simplify the setup and avoid the inaccuracies such modeling can introduce.
- Better generalization: a model trained on real data tends to transfer well, since there is no simulation-to-reality gap to bridge.

There are also considerations to keep in mind:

- Training speed: as you noted, collecting training data from the physical robot is slower than generating it in a virtual simulation, so be prepared for longer training cycles; this is the trade-off for accuracy.
- Center of gravity: since you still plan to represent the center of gravity, make sure that one modeled quantity is measured or estimated accurately.
- Sensor noise: real sensor data includes noise and imperfections, which the training process has to tolerate or filter.
- Data collection: you need a reliable method for capturing, calibrating, and synchronizing the sensor data coming off the robot.

Overall, learning from the real robot's sensors is a valuable approach when realism and accuracy are critical. It can help you develop control algorithms and machine learning models that are well-suited for real-world deployment; just be mindful of the slower data collection and the practical challenges of running many trials on physical hardware.
mCasualJacques
so what i plan to do is: the robot has an accelerometer/magnetometer sensor on the hip, and there are 2 motors per thigh bone, 1 motor per knee bone, and initially the foot joints are locked or spring-loaded, with angle sensors on those 4 or 6 joints. During training the robot only has an arduino or esp32, used to communicate with my PC which will be using my RTX2070 and tensorFlow. Things in the real world move slowly compared to a virtual simulation, so the data from the robot's sensors will have the necessary throughput.

Your plan to use an accelerometer/magnetometer on the robot's hip, along with angle sensors on the joints, is a practical way to capture real-world data for training. Using an Arduino or ESP32 to collect and transmit the sensor data to your PC, where TensorFlow runs on your RTX 2070, is a reasonable setup. Some considerations and advantages:

- Sensor data: the data captures the robot's real environment and behavior, helping the model adapt to the robot's physical constraints and dynamics.
- Throughput: as you say, real-world motion is slow compared to a virtual simulation, so the sensors' data rate is modest compared to the computational power of the RTX 2070; capturing and processing it in real time is straightforward.
- Realistic training: training on real-world data means the model reflects the robot's actual behavior, which leads to more reliable, robust control strategies.
- Simplified setup: since you are not modeling motor and gear-train dynamics mathematically, the setup is simpler to build and maintain.
- Real-time control: with sensor data streaming to the PC in real time, you can develop and test real-time control algorithms on the physical robot.
- Adaptive learning: a model trained on real data can adapt to the variations and disturbances that occur during the robot's operation.

As you proceed, pay attention to data preprocessing and synchronization so the sensor stream accurately reflects the robot's state, and to the control strategies that turn this data into locomotion and balance commands. Overall, the approach aligns well with building a robot that adapts to real-world conditions.
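On the PC side, collecting the streamed samples could look like this (a sketch assuming the pyserial package, a placeholder port name, and a simple comma-separated line format; adapt all three to the actual firmware):

import serial  # pyserial

# port name and baud rate are placeholders; match them to the ESP32/Arduino
port = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

samples = []
while len(samples) < 1000:
    line = port.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    # assumed format: ax,ay,az,mx,my,mz then one angle per joint
    try:
        samples.append([float(v) for v in line.split(",")])
    except ValueError:
        pass  # skip malformed lines

print("collected", len(samples), "sensor samples")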
mCasualJacques
i also have the two versions of the kinect sensor, i could use it to observe the robot's pose. the accelerometer is enough to detect if the robot is falling, but the kinect would open up possibilities. the older kinect is possibly adaptable for the nvidia nano which runs Linux and tensorflow Lite

Integrating Kinect sensors into your robotics project can indeed open up a wide range of possibilities, especially for observing and tracking the robot's pose and its interactions with the environment. Kinect sensors provide depth sensing, skeletal tracking, and RGB capture, making them versatile tools for robotics.

Advantages:
- Full-body pose tracking: detailed information about the robot's pose and the positions of its limbs and joints, valuable for monitoring movement, balance, and gestures.
- Environment perception: depth data lets the robot perceive its surroundings in 3D, for obstacle detection, navigation, and interaction with objects.
- Human-robot interaction: gesture and body-language recognition enables natural, intuitive communication if the robot interacts with humans.
- Real-time feedback: Kinect data arrives in real time, allowing immediate control adjustments for stability and safety.

Considerations:
- Sensor compatibility: as you mentioned, you'll need to verify that the Kinect works with the NVIDIA Nano running Linux and TensorFlow Lite; investigate available drivers or libraries for your hardware and software stack.
- Data processing: Kinect sensors produce a substantial amount of data (depth maps and RGB images), so efficient processing and integration into the control system is essential.
- Calibration: accurate calibration between the Kinect and the robot's coordinate system is crucial so the sensor data aligns with the robot's movements.
- Power consumption: consider the Kinect's power requirements, especially on a resource-constrained platform like the Nano.
- Sensor placement: choose a mounting position that captures the desired field of view and pose information.

Overall, a Kinect can enhance the robot's perception and control capabilities; just be prepared for the technical work of sensor integration and data processing.
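For the Kinect 360 specifically, the community-maintained libfreenect project provides Python bindings; a depth-frame grab might look like this (a sketch assuming libfreenect and its Python wrapper are installed and working on the target platform):

import numpy as np
import freenect  # libfreenect Python bindings

# grab one 640x480 depth frame from a Kinect 360
depth, _timestamp = freenect.sync_get_depth()

# crude example: inspect raw depth values in a patch near the image center
center = depth[200:280, 280:360]
print("min raw depth value in patch:", int(np.min(center)))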
mCasualJacques
i think i saw on youtube or in a blog robots using the old kinect (kinect 360?)

Yes, you are correct. The older Kinect sensor, often called the Kinect 360 (Kinect for Xbox 360), has been used in many robotics and computer vision projects. It became popular in the maker and robotics communities for its affordability and its depth-sensing and skeletal-tracking capabilities. Common uses include:

- 3D mapping: building 3D maps of environments for navigation and obstacle avoidance.
- Gesture recognition: interpreting gestures for human-robot interaction.
- Object detection: detecting and tracking objects in the robot's environment.
- Teleoperation: driving robots by capturing the operator's body movements.
- Navigation: providing depth information for path planning.
- Augmented reality: integrating AR elements into robotic applications.
- Education and research: teaching robotics and computer vision concepts.

Although the Kinect 360 is an older device, its affordability keeps it accessible for hobbyists and small-scale robotics projects. To use it with the NVIDIA Nano running Linux and TensorFlow Lite, you'll likely need community-developed drivers or libraries: Microsoft discontinued production and support of the Kinect, so community-driven, open-source projects are now the primary way to use it in contemporary applications.
mCasualJacques
can you name a 'maker' or researcher that used the kinect on-board a robot?

Certainly! One example of a researcher who has used the Kinect sensor onboard a robot is Vijay Kumar's lab at the University of Pennsylvania. Vijay Kumar is a professor and researcher known for his work in robotics, particularly in the field of multi-robot systems and swarm robotics. In one of their projects, they integrated the Kinect sensor on a quadrotor (a type of flying robot) to enable real-time 3D perception and mapping of the environment. The Kinect sensor provided depth sensing and object recognition capabilities, allowing the robot to navigate and interact with its surroundings effectively. While this is just one example, it showcases the versatility of the Kinect sensor in robotics research. Many researchers and makers have explored creative applications of the Kinect in various robot platforms and applications. If you're interested in specific projects or research papers, you can search for publications related to Kinect sensor integration in robotics to find more examples and inspiration.
mCasualJacques
wasn't it a video where the quadcopter was mapping the interior of a building as part of a contest to create a robot that could go into poisonous/irradiated disaster situations?

You are correct, and I apologize for the earlier oversight. The project I mentioned involving the Kinect sensor on a quadrotor mapping the interior of a building is indeed related to the DARPA Robotics Challenge (DRC). The DRC was a competition aimed at developing robots capable of performing tasks in disaster scenarios, including navigating and mapping indoors in hazardous environments. One notable project in this context was the University of Pennsylvania's team led by Professor Vijay Kumar. They used a Kinect sensor-equipped quadrotor to perform mapping and exploration tasks in complex indoor environments as part of the competition. The Kinect sensor's depth-sensing capabilities were instrumental in creating 3D maps of the surroundings and aiding the robot's navigation. This project, along with others in the DARPA Robotics Challenge, demonstrated the potential of Kinect sensor integration in robotics for disaster response and similar applications.
mCasualJacques
googling Vijay Kumar Kinect should lead me to him

Yes, you can certainly find more information about Professor Vijay Kumar and his work by searching for "Vijay Kumar Kinect" on search engines like Google. Professor Vijay Kumar is well-known in the field of robotics, and his projects and research, including those involving the Kinect sensor, are likely to be documented and discussed in various publications, articles, and videos online. This can be a great way to learn more about specific robotics applications and innovations related to the Kinect sensor.

Wednesday, January 18, 2017

Minimalist XY firmware for Arduino Leonardo + CNC Shield

everything is hardcoded
the commands contain relative movements
there are no units, or you could say the units are motor steps

only X and Y are taken into consideration

to move the pen by ( 12,23 ) steps the command is

G0 X12 Y23 Z0
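a host-side sender for this protocol could be as simple as the following sketch (assuming the pyserial package; the port name is a placeholder, and the 9600 baud matches the firmware's Serial.begin below):

import serial  # pyserial
import time

ser = serial.Serial("/dev/ttyACM0", 9600)  # Leonardo shows up as a USB serial port
time.sleep(2)  # give the board a moment after the port opens

ser.write(b"G0 X12 Y23 Z0\n")             # move the pen by (12, 23) motor steps
print(ser.readline().decode().strip())    # the firmware echoes the parsed values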

// the basic hardware control was inspired by HC_CNC_Shield_Test by Andrew Davies

#define EN 8

#define X_STEP 2
#define X_DIR 5 
#define Y_STEP 3
#define Y_DIR 6
#define Z_STEP 4
#define Z_DIR 7

#define A_DIR 13  /* Direction pin for Aux driver. Requires D13 and A-DIR pins to be shorted */
#define A_STEP 12 /* Step pin for Aux driver. Requires D12 and A-STEP pins to be shorted */

#define X_ENDSTOP 9   /* X axis endstop input pin */
#define Y_ENDSTOP 10  /* Y axis endstop input pin */
#define Z_ENDSTOP 11  /* Z axis endstop input pin */
#define ABORT A0  /* Abort input pin */
#define HOLD A1   /* Hold input pin */
#define RESUME A2 /* Resume input pin */

#define COMMAND_SIZE 128
char command[COMMAND_SIZE];
byte serial_count = 0;

void setup()
{
    /* Configure the stepper drive pins as outputs */ 
  pinMode(EN, OUTPUT); 
  pinMode(X_DIR, OUTPUT); 
  pinMode(X_STEP, OUTPUT); 
  pinMode(Y_DIR, OUTPUT); 
  pinMode(Y_STEP, OUTPUT); 
  pinMode(Z_DIR, OUTPUT); 
  pinMode(Z_STEP, OUTPUT); 
  pinMode(A_DIR, OUTPUT); 
  pinMode(A_STEP, OUTPUT); 
  
  /* Configure the control pins as inputs with pullups */
  pinMode(X_ENDSTOP, INPUT_PULLUP);
  pinMode(Y_ENDSTOP, INPUT_PULLUP);
  pinMode(Z_ENDSTOP, INPUT_PULLUP);

  pinMode(ABORT, INPUT_PULLUP);
  pinMode(HOLD, INPUT_PULLUP);
  pinMode(RESUME, INPUT_PULLUP);

  /* EN is active-low: keep the stepper drivers disabled until a move is executed */
  digitalWrite(EN, HIGH);
  Serial.begin(9600);
}

void loop()
{
  if( Serial.available() > 0 )
  {
    char c = Serial.read();
    if( c == '\n' )
    {
      command[serial_count] = 0;  // terminate the command string
      if( serial_count > 0 )
      {
        doit( command );
      }
      serial_count = 0;
    }
    else if( serial_count < COMMAND_SIZE - 1 )  // leave room for the terminator
    {
      command[serial_count++] = c;
    }
  }
}

void doit( char *input )
{
  char* token = strtok( input, " \n" );
  int G_val = -1;
  long X_val = 0;
  long Y_val = 0;
  long Z_val = 0;
  while( token != 0 )
  {
    switch( token[0] )
    {
      case 'G':
        G_val = atoi( token + 1 );
      break;
      case 'X':
        X_val = atol( token + 1 );
      break;
      case 'Y':
        Y_val = atol( token + 1 );
      break;
      case 'Z':
        Z_val = atol( token + 1 );
      break;
    }
    token = strtok( 0, " \n" );
  }
  char mesg[48];  // large enough for four full-range values
  sprintf( mesg, "%d %ld %ld %ld", G_val, X_val, Y_val, Z_val );
  Serial.println( mesg );
  digitalWrite( EN, LOW );   // enable the drivers (EN is active-low)
  motorizin( G_val, X_val, Y_val, Z_val );
  digitalWrite( EN, HIGH );  // disable the drivers again
}

//------------------------------------------------------------------------------
void motorizin( int G_val, long X_val, long Y_val, long Z_val )
{
  switch( G_val )
  {
    case 0:
    case 1:
      line( X_val, Y_val, Z_val );
    break;
  }
}

//------------------------------------------------------------------------------
//first version of this program - only relative moves supported
//------------------------------------------------------------------------------
void line( long dx, long dy, long dz )
{
  // dz is accepted but ignored: this version only drives the X and Y motors
  digitalWrite( X_DIR, dx >= 0 ? LOW : HIGH );
  digitalWrite( Y_DIR, dy >= 0 ? LOW : HIGH );
  long adx = abs( dx );
  long ady = abs( dy );
  if( adx >= ady )
  {
    subplot( 0, adx, ady );  // X is the major axis
  }
  else
  {
    subplot( 1, ady, adx );  // Y is the major axis
  }
}

//------------------------------------------------------------------------------
void subplot( bool horizVert, long dx, long dy )
{
  long D = 2 * dy - dx;   // Bresenham error accumulator
  bool stepy = false;
  for( long x = 1; x <= dx; x++ )
  {
    if( D > 0 )
    {
      stepy = true;       // the minor axis advances on this step
      D = D - dx;
    }
    D = D + dy;
    if( horizVert == 0 )
    {
      subsubplot( true, stepy );
    }
    else
    {
      subsubplot( stepy, true );
    }
    stepy = false;
  }
}

//------------------------------------------------------------------------------
void subsubplot( bool stepx, bool stepy )
{
  if( stepx )
  {
    digitalWrite( X_STEP, HIGH ); 
  }
  if( stepy )
  {
    digitalWrite( Y_STEP, HIGH ); 
  }
  delay( 10 );  // step pulse width; this also sets the feed rate
  digitalWrite( X_STEP, LOW );  
  digitalWrite( Y_STEP, LOW ); 
}

Tuesday, January 17, 2017

Bresenham Line ( relative ) implementation
written/tested in Daz Studio but will go into an Arduino Leonardo plotter firmware


var dirx;
var diry;

plot( -10, -3 );

function plot( dx, dy ) // this is a relative movement
{
  dirx = ( dx >= 0 );
  diry = ( dy >= 0 );
  var adx = Math.abs( dx );
  var ady = Math.abs( dy );
  if( adx >= ady )
  {
    subplot( 0, adx, ady );
  }
  else
  {
    subplot( 1, ady, adx );
  }
}

function subplot( horizVert, dx, dy )
{
  var D = 2 * dy - dx;
  var stepy = 0;
  for( var x = 1; x <= dx; x++ )
  {
    if( D > 0 )
    {
      stepy = 1;
      D = D - dx;
    }
    D = D + dy;
    if( horizVert == 0 )
    {
      subsubplot( 1, stepy );
    }
    else
    {
      subsubplot( stepy, 1 );
    }
    stepy = 0;
  }
}

function subsubplot( stepx, stepy )
{
 debug( stepx, stepy );
}

Bresenham Line implementation in Daz Studio Script

// DAZ Studio version 4.9.2.70 filetype DAZ Script

//plot( 3, 10 );
//plot( 10, 3 );
//plot( -3, 10 );
//plot( 10, -3 );
//plot( 3, -10 );
//plot( -10, 3 );
//plot( -3, -10 );
plot( -10, -3 );

var dirx;
var diry;

function plot( dx, dy )
{
  var adx = Math.abs( dx );
  var ady = Math.abs( dy );
  dirx = dx >= 0; //replace by: set motor x direction
  diry = dy >= 0; //replace by: set motor y direction
  if( adx >= ady )
  {
    subplot( 0, adx, ady );
  }
  else
  {
    subplot( 1, ady, adx );
  }
}

function subplot( horizVert, dx, dy )
{
  var D = 2 * dy - dx;
  var y = 0;
  for( var x = 0; x <= dx; x++ )
  {
    if( horizVert == 0 )
    {
      subsubplot( x, y );
    }
    else
    {
      subsubplot( y, x );
    }
    if( D > 0 )
    {
      y = y + 1;
      D = D - dx;
    }
    D = D + dy;
  }
}

function subsubplot( x, y )
{
 debug( !dirx ? -x : x, !diry ? -y : y ); 
}



New ideas for an A.I.

Stupid idea that gave rise to this

while trying to adapt the Bresenham line algorithm for the firmware of a CNC machine (2 stepper motors) i was thinking "it would be a good thing to really understand the principle Bresenham founded this on; i think it's called 'accumulation of error'"

https://en.wikipedia.org/wiki/Bresenham's_line_algorithm

i would add that the "auto-correction" phase is the other important cog of that machine
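to make that concrete, here's the accumulate-then-correct loop at the heart of it, mirroring the step logic in the firmware above (a Python sketch, first octant only, 0 <= dy <= dx):

def bresenham_steps(dx, dy):
    error = 2 * dy - dx          # the accumulated error, kept as an integer
    for _ in range(dx):
        step_y = error > 0       # auto-correction: time to take a y step...
        if step_y:
            error -= dx          # ...which pays back the accumulated error
        error += dy              # every x step accumulates a little more error
        yield True, step_y       # (x always steps, y only sometimes)

print(list(bresenham_steps(10, 3)))  # 10 x steps, 3 of them with a y step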

-----

then i thought, simple machinations of this type could be the base for my A.I.

-----

then i thought in a way solving a problem is just re-stating the question, or rather the object at the center of this question. Among the infinity(?) of ways to restate it, we want to pick the re-stating that satisfies the questioner ... or ... pick the restating that


for example, the question "what is the universe?"

we determine the object of the question is "the universe"

we presume there are many ways to restate "the universe"

- something big
- something we live in
- something humans think it is
- something infinite
- something eternal
- something finite

then it gets complicated

----

since i want to stay on simplicity-road  ...

----

i'll just keep in mind the idea of re-stating (paraphrasing, tautologies, vérités de La Palice (truisms), etc.)


Sunday, January 1, 2017

I may try to invent an A.I. in 2017

so i'll post about this sometimes

--------
mcjAI - 00000

Maybe an important step in inventing an A.I. is to name it properly; for instance, the A.I.s that strive to pass the Turing test could be named Fake-Human-Dialog'ers. Personally, the A.I. that interests me more is an Artist Intelligence: a machine that produces interesting art, like drawings, songs and storytelling. Not necessarily a machine that answers questions or invents other machines. But but but, maybe if i can devise an art-generating machine then, as an aside, it will be able to (understand and) answer questions and invent machines.
----------

here's a test of a daz script that uses a dictionary to identify nouns and adjectives in a given text, then replaces them with other nouns and adjectives
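the technique is simple enough to sketch outside of Daz Studio; here's the gist in Python, with a toy word list standing in for the real dictionary:

import random

# toy part-of-speech lists; the real script uses a large dictionary
NOUNS = ["princess", "river", "sea", "multitude", "furnace"]
ADJECTIVES = ["proud", "deep", "silky", "stagnant", "woolly"]

def mutate(text):
    # replace every known noun/adjective with a random word of the same class
    out = []
    for word in text.split():
        if word in NOUNS:
            out.append(random.choice(NOUNS))
        elif word in ADJECTIVES:
            out.append(random.choice(ADJECTIVES))
        else:
            out.append(word)
    return " ".join(out)

print(mutate("the proud princess crossed the deep river"))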



the original text was a translation of Nostradamus quatrains

------------
from vertiginous deepest matriarch of visceral europe man
a class-conscious houri will be heroic to helical people mesosphere
who will by his airstrip seduce a calculating multitude, likeness
his fat-head will increase in numerate generality of recent east"
------------------------------------
beasts subliminal with whig will nymph oxonian rivers,
heart-rending greater harpoon of woolly posterity will be against hister.
into a fighter of orang-utan will corny stagnant silky be drawn,
when wall-to-wall fen of germany observes nothing.
------------------------------------
in preshrunk gland very not far from venus,
unreadable questionable greatest ones of asia and of africa:
they are said to have come from soppy rhine and from molecule
hister cries, tears at malta and puffy ligurian sea-coast.
------------------------------------
amperage will not be regained,
it will be occupied by a black, proud, stocky and get-at-able man:
when lewd intervention of prone lumber-room is opened,
serried leakage of venice will be inviolate by hister.
------------------------------------
hardy scrappy and linnaean armed over-sensitive will sol-fa historic decorative furnace,
operatic unwitting chosen one, collective captives not returning
prenatal world's treacherous crime, delectable aperient politic irale - israel - not at ease,
barb, hister, malta, and unborn remittent unreflecting does not return.
Happy New Year !