16 Dec: AI and machine learning are going to change the entire future
At first, the whole idea of machine learning and artificial intelligence completely baffled me. I remember thinking, “How could these technologies possibly take over everything? Could they really be a threat to us?” It all seemed a bit too much.
Things started to click during an applied math class, and I began to see how much machine learning could change our world.
Then came 2019, when the coronavirus began causing significant problems worldwide. Data scientists in Germany and several other countries stepped up. They collected samples from different places to analyze them. They tracked disease spread through data cleaning and probability assessments, a key method in machine learning. Their work was crucial in helping us understand how and where the virus was moving.
A small display of a few lines of code can neatly represent several years of collected data. If machine learning is used correctly, we can leave many diseases in the past, and we can also see how far we can go.
Machine learning is not only here; it has become advanced enough to suggest what may happen on a given day. Of course, all of this rests on probability: we cannot say an event must happen on one particular day, but by extracting a whole year's worth of data we can estimate how likely it is.
Through modeling and data science, we can identify diseases and determine how often natural disasters occur and in which regions. Thus, we can avoid much of the trouble they cause.
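As a toy illustration of that idea, here is a minimal sketch, using made-up numbers rather than real records, of how a year of event counts could be turned into rough per-region daily probabilities with pandas:
import pandas as pd

# Hypothetical yearly counts of flood events per region (made-up numbers)
events = pd.DataFrame({
    'region': ['North', 'South', 'East', 'West'],
    'flood_events': [12, 3, 7, 1],
})

# Rough chance that any given day sees a flood in each region,
# assuming the events are spread evenly across 365 days
events['daily_probability'] = events['flood_events'] / 365

print(events.sort_values('daily_probability', ascending=False))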
Data manipulation using Pandas:
Let's look at a simple dataset that tells us how many natural disasters occurred in Afghanistan in 2022, in which areas, and how great the loss of life was. We will also try to determine the age group of most of those who died there.
# Import necessary libraries
import pandas as pd
import matplotlib.pyplot as plt  # needed for the plots below
from sklearn.model_selection import train_test_split  # reserved for a later modeling step
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load the dataset (the file name is a placeholder; point it at your own copy)
df = pd.read_csv("afghanistan_disasters_2022.csv")
Python offers libraries such as pandas to make your job easier. These libraries are already built on top of the underlying mathematical machinery, so you don't have to worry about doing a lot of calculations and value substitutions by hand.
print(df.head())
# Plotting the data
fig, axs = plt.subplots(2, 2, figsize=(12, 10))
# Plot 1: Persons killed and injured
axs[0, 0].bar(df['INC_TYPE'], df['Persons_killed'], color='red', alpha=0.7, label='Persons Killed')
axs[0, 0].bar(df['INC_TYPE'], df['Persons_injured'], color='blue', alpha=0.7, label='Persons Injured')
axs[0, 0].set_title('Persons Killed and Injured')
axs[0, 0].set_xlabel('Incident Type')
axs[0, 0].set_ylabel('Count')
axs[0, 0].legend()
# Plot 2: Families affected
axs[0, 1].bar(df['INC_TYPE'], df['Families_affected'], color='green', alpha=0.7)
axs[0, 1].set_title('Families Affected')
axs[0, 1].set_xlabel('Incident Type')
axs[0, 1].set_ylabel('Count')
# Plot 3: Individuals affected
axs[1, 0].bar(df['INC_TYPE'], df['Individuals_affected2'], color='orange', alpha=0.7)
axs[1, 0].set_title('Individuals Affected')
axs[1, 0].set_xlabel('Incident Type')
axs[1, 0].set_ylabel('Count')
# Plot 4: Houses damaged and destroyed
axs[1, 1].bar(df['INC_TYPE'], df['Houses_damaged'], color='purple', alpha=0.7, label='Houses Damaged')
axs[1, 1].bar(df['INC_TYPE'], df['Houses_destroyed'], color='pink', alpha=0.7, label='Houses Destroyed')
axs[1, 1].set_title('Houses Damaged and Destroyed')
axs[1, 1].set_xlabel('Incident Type')
axs[1, 1].set_ylabel('Count')
axs[1, 1].legend()
plt.tight_layout()
plt.show()
Implementation of a machine learning model in PenTesting:
I wanted to put machine learning to work protecting something that matters in my daily routine, and the result was quite impressive.
Machine learning and data science are incredible but can be frustrating. Imagine dealing with millions of data records—it’s a daunting task to scan through them all. And it’s not practical to keep buying different scripts and tools online whenever you face this challenge.
I’ve come across a few tools that have not only helped me manage large datasets but have also enabled me to generate significant revenue. This experience inspired me to write about various machine-learning models and scripts. These tools are designed to efficiently handle vast amounts of data without requiring new purchases.
I developed a web scraper and automation script for collecting data for colleagues. The script adjusts user agents and IP settings to cope with the data-collection challenges that come with machine learning and artificial intelligence work.
It extracts data from Google, saves the results in a CSV file, and cycles through different user agents imported from that CSV file to identify which requests return accurate results. With this process I efficiently tested roughly 28,000 PayPal URLs.
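The full script is not shown here, but a minimal sketch of the rotation idea, assuming the requests library and hypothetical file and column names, might look like this:
import csv
import requests

# Load candidate user agents (file and column names are hypothetical)
with open('user_agents.csv', newline='') as f:
    user_agents = [row['user_agent'] for row in csv.DictReader(f)]

# Load the proxies to test, one per row (hypothetical file)
with open('proxies.csv', newline='') as f:
    proxies = [row['proxy'] for row in csv.DictReader(f)]

working = []
for i, proxy in enumerate(proxies):
    # Rotate through the user agents so no single one is overused
    headers = {'User-Agent': user_agents[i % len(user_agents)]}
    try:
        r = requests.get('https://www.google.com', headers=headers,
                         proxies={'http': proxy, 'https': proxy}, timeout=5)
        if r.status_code == 200:
            working.append(proxy)
    except requests.RequestException:
        continue  # dead or blocked proxy, skip it

# Save the proxies that responded (hypothetical output file)
with open('working_proxies.csv', 'w', newline='') as f:
    csv.writer(f).writerows([[p] for p in working])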
There are also some other libraries I could use in certain cases, such as NumPy, but once we set things up, the first job is to remove all the duplicates.
import pandas as pd
df = pd.read_csv(r"paypal.csv")
We are dealing with a dataset that includes “28,862 rows × 4 columns” of URLs—a significant amount to manage.
While you could use Google Sheets, Microsoft Excel, or other tools to handle this data, using the Pandas DataFrame library in Python can save you a lot of time.
Being able to drop columns or perform other manipulations directly with Pandas means you don't have to rely heavily on external applications. This keeps the whole process inside your own environment and makes it more efficient. That's why I recommend using tools that match your logic and streamline your workflow.
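For example, removing a column you don't need is a one-liner (the column name here is purely illustrative):
# Drop an unneeded column (the name 'Notes' is hypothetical)
df = df.drop(columns=['Notes'])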
# Remove exact duplicate rows
df = df.drop_duplicates()
df

# Drop every row flagged as out of scope for the program
for x in df.index:
    if df.loc[x, 'Flags'] == 'Out of Scope':
        df.drop(x, inplace=True)
df
After dropping so many rows, the data I got was not as clean as I had hoped, but it was quite a bit better. First I removed all the duplicates; then I removed the out-of-scope domains.
20,230 rows × 4 columns
df = df.reset_index(drop=True)
df
Filtering a significant amount of subdomain data to find possible subdomain takeover targets.
I have already written a couple of articles about finding possible subdomains; if you haven't checked them out, please do. In this case, I am going to reuse the existing records.
I already have the records, which contain a lot of junk that I will remove to make them better.
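A minimal sketch of that filtering step, assuming the cleaned DataFrame from above has a URL column (the column name and the status-code heuristic are assumptions; a proper check would also inspect DNS CNAME records), could look like this:
import requests

candidates = []
for url in df['URL']:
    try:
        r = requests.get(url, timeout=5)
        # A 404 served from a third-party provider can hint at a dangling
        # record, the classic precondition for a subdomain takeover
        if r.status_code == 404:
            candidates.append(url)
    except requests.RequestException:
        # Unresolvable or unreachable hosts are also worth a closer look
        candidates.append(url)

print(len(candidates), 'possible takeover candidates')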
To be continued.