CUDA Cores: These are the primary processing units of the GPU. Higher CUDA core counts generally translate to better parallel processing performance.
Tensor Cores: Specialized cores designed specifically for deep learning tasks, such as matrix multiplications, which are crucial for neural network operations.
VRAM (Video RAM): This is the memory available to the GPU for storing data and models. More VRAM allows for handling larger models and datasets efficiently (see the sketch after this list for a rough sizing rule of thumb).
Clock Frequency: Represents the speed at which the GPU operates, measured in MHz. Higher frequencies generally lead to better performance.
Price: The cost of the GPU is a crucial factor, especially for businesses or research labs with budget constraints. It’s essential to balance performance needs with affordability.
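To make the VRAM factor concrete, here is a minimal back-of-the-envelope sketch of how much memory a model's weights alone occupy at a few common precisions. The 7-billion-parameter model is just an example, and real usage is higher once the KV cache and activations are added.

def estimate_weight_vram_gb(num_params_billion, bytes_per_param):
    """Rough VRAM needed just to hold the model weights, in GiB."""
    return num_params_billion * 1e9 * bytes_per_param / (1024 ** 3)

# example: a 7-billion-parameter model at different precisions
for precision, nbytes in [('FP16', 2), ('INT8', 1), ('INT4', 0.5)]:
    print(f'7B weights in {precision}: ~{estimate_weight_vram_gb(7, nbytes):.1f} GiB')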
NVIDIA GPUs based on their suitability for LLM inference
There are many questions about how to use the color orange with CTA buttons. When I worked at Eco/Gazelle, our brand color was orange, and we dealt with orange every day.
Can you guess which one is accessible?
Colorable calculates the contrast ratio between colors and checks that they meet the Web Content Accessibility Guidelines (WCAG), an established set of recommendations for making web content more accessible. AA compliance, the most common level, requires a minimum contrast ratio of 4.5:1, or 3:1 for large text, while AAA compliance, the more stringent and rarer level, requires a minimum of 7:1, or 4.5:1 for large text. For both AA and AAA, large text means at least 24px, or 18.66px if bold.
The white text seemed much clearer to me, but it didn’t meet the AA standard for my 14px button text. I couldn’t change the background color because orange was a brand color used across our apps and websites; using any other color would make the button feel disconnected. Although white stood out more, black was technically the more accessible alternative, but it also felt like Halloween, and I would lose that modern, non-digital connection.
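To make those thresholds concrete, here is a minimal sketch of the WCAG relative-luminance and contrast-ratio formulas that tools like Colorable implement. The orange #FF6600 below is only an assumed stand-in; the actual brand color isn’t given here.

def relative_luminance(hex_color):
    """WCAG relative luminance of an sRGB hex color like '#FF6600'."""
    channels = [int(hex_color.lstrip('#')[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in channels]
    return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2]

def contrast_ratio(color_a, color_b):
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted([relative_luminance(color_a), relative_luminance(color_b)], reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio('#FFFFFF', '#FF6600'))  # white text on the assumed orange
print(contrast_ratio('#000000', '#FF6600'))  # black text on the assumed orange

With this particular orange, white lands below the 4.5:1 AA threshold for normal-size text while black clears it, which mirrors the tension described above.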
Squint Test
I initially used the squint test because it was the simplest and quickest way to determine if this issue was worth investigating. It’s a commonly used technique that can be employed by anyone. By squinting, you can easily identify which elements stand out on the page, such as the prominence of CTAs in this case. Squinting serves as a natural contrast checker, but it lacks scientific backing. It wouldn’t meet accessibility standards in a formal setting.
Conclusion: The white text button appears to be the winner, but my testing was not based on any scientific evidence. I am not colorblind, so there are significant gaps in my understanding. Let’s consider asking colorblind users for further insight.
Color Blind Simulator
There are numerous tools available for calculating accessibility and simulating color blindness. While using these tools may take a little longer than the quick squint test, it is still easy and fast to use them for making accessible design decisions. I incorporated a couple of tools to enhance my research beyond just relying on the squint test.
Conclusion: I received conflicting results from different tools. When I used Stark to simulate a squint test for different types of color-blind users, the white text button appeared most clearly. However, Colorable indicated that the black text button was favored by a wide margin. To determine whether the issue was with the tools or some other variable, I needed to consider the human factor.
User Testing with Color Blind Participants
Since I am not colorblind, I needed to survey actual colorblind users. I asked three questions to a sample set of about 20 colorblind colleagues.
What type of color blindness do you have?
Which option is easier to read?
Why?
Q1: What Type of Color Blindness Do You Have?
The majority of my users had deuteranopia or deuteranomaly (no green cones or a limited number of green cones). This is the most common type of color blindness. The second most common type is protanopia; deuteranopia and protanopia combined affect 8% of men and 0.5% of women. Since my sample roughly reflected the distribution in the general population, I felt our colleagues were a good mix of subjects.
Q2: Which Option is Easier to Read?
Out of everyone surveyed, 61% of users preferred the white text button. Even color blind users thought the white text button was more legible. I was curious how the other 39% landed on the black text button, so I looked at answers one and two to see how different types of color blindness affected the second answer.
The results showed that there is a clear preference for certain text colors depending on the type of color blindness a person has. Among protanopia/protanomaly color-blind users, 71% favored white text, while users with deuteranopia/deuteranomaly were evenly split at 50/50. The only user with tritanopia/tritanomaly preferred white text, and the one user with monochrome/achromatopsia favored black text.
It’s important to remember that our color tools and mathematical analyses may not account for all user experiences, especially when it comes to color blindness. In design, it’s crucial to empathize with all users and ensure that the design journey meets both brand and user goals. However, it can be challenging because no matter what option we choose, it may override someone’s color blind preferences.
Q3: Why is That Option Easier to Read?
“Whatever colour this is (I don’t really know lol) this is easier for me to read with the white text.”
– Deuteranopia/Deuteranomaly, White Text Button
“Difference between the two is relatively small, but definitely more contrast between white and surrounding color than I see with the black text.”
– Protanopia/Protanomaly, White Text Button
“Black is more easily identifiable (and faster) — the white falls into the background.”
– Deuteranopia/Deuteranomaly, Black Text Button
“The black blends together with the orange.”
– Tritanopia/Tritanomaly, White Text Button
A few responses stood out — they talked about accessibility problems like buzzing on the screen, headaches, and white on dark text:
“I honestly don’t have trouble reading either of these but the white text just seems slightly easier on my eyes.”
– Protanopia/Protanomaly, White Text Button
“I can read the black text just fine, but it makes my head hurt to look at it for a long time.”
– Unsure, White Text Button
“Not sure. Neither one is difficult to read. The white text is slightly easier on my eyes. The black text with the orange background has a slight halo effect around it. The white is easier to track as I scroll.”
– Protanopia/Protanomaly, White Text Button
Conclusion: Overall, users preferred the white text button over the black text button, primarily because of contrast, but different types of color blindness produced different results; notably, the monochrome/achromatopsia user preferred black text. I also uncovered some interesting legibility concerns: some users had trouble reading the black text, saying it caused buzzing and headaches. Even though black text is the accessible option by WCAG standards, those standards fail to account for this kind of accessibility issue.
Since the math is what determines whether a site is legally considered accessible, it is critical to design based on that math. However, the contrast equations and standards I researched left me with doubts: I would like to believe there is an outlier, particularly with orange, that throws the numbers off. Further research is needed to determine why the white text button was preferred. If you’re hoping for a clear answer on our orange black/white challenge, unfortunately, I don’t have a great resolution here.
# imports and driver setup (Firefox, since the full-page screenshot call used later is Firefox-only)
import pandas
from bs4 import BeautifulSoup
from selenium import webdriver
from time import sleep

driver = webdriver.Firefox()

# run a function to get the list of countries Open.Trends has listed on their site
countries = getCountries()

# initialize a dictionary to store the information
d = {
    'country': [],
    'website': [],
    'visits': []
}

# iterate through that list
for country in countries:
    # follow semrush's URL formatting and plug in the country using a formatted string
    url = f'https://www.semrush.com/trending-websites/{country}/all'
    # navigate to the URL using Selenium Webdriver
    driver.get(url)
    # feed the page information into BeautifulSoup
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    # extract the table data using BeautifulSoup
    results = getTableData(soup)
    # extend the dictionary with this country's results (don't overwrite previous countries)
    d['country'] += results['country']
    d['website'] += results['website']
    d['visits'] += results['visits']

# save this into some sort of file
df = pandas.DataFrame(d)
df.to_csv('popular_websites.csv', index=False)
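The getCountries and getTableData helpers used above aren't shown. As a rough illustration, here is a minimal sketch of what getTableData could look like, assuming the page exposes a plain table whose rows contain the website and its visit count; the real SEMrush markup will differ, so the tag and cell positions need checking in a browser. It reuses the country variable from the enclosing loop, and in a real script it would be defined before that loop runs.

def getTableData(soup):
    # hypothetical markup: each <tr> holds cells like [rank, website, visits]
    results = {'country': [], 'website': [], 'visits': []}
    table = soup.find('table')
    if table is None:
        return results
    for row in table.find_all('tr')[1:]:  # skip the header row
        cells = [cell.get_text(strip=True) for cell in row.find_all('td')]
        if len(cells) < 3:
            continue
        results['country'].append(country)  # `country` is set by the loop above
        results['website'].append(cells[1])
        results['visits'].append(cells[2])
    return results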
# iterate through all the websites we found
for i in range(len(df['website'])):
    # select the website
    url = df.loc[i, 'website']
    # call the API on the website
    category = getCategory(url)
    # save the results
    df.loc[i, 'category'] = category

# filter out all the undesirable categories (reset the index so positional loops still work)
undesirable = [...]
df = df.loc[~df['category'].isin(undesirable)].reset_index(drop=True)

# save this dataframe to avoid needing to do this all over again
df.to_csv('popular_websites_filtered.csv', index=False)
def acceptCookies(driver):
    # this function will probably consist of a bunch of try-except blocks
    # in search of a button that says accept/agree/allow cookies in every language
    # ngl i gave up like 1/3 of the way through
    ...

def notBot(driver):
    # some websites will present a captcha before giving you access
    # there are ways to beat that captcha
    # i didn't even try but you should
    ...
# iterate through websites
for i in range(len(df['website'])):
    url = df.loc[i, 'website']
    country = df.loc[i, 'country']
    driver.get(url)
    # wait for the page to load
    # you shouldn't really use static sleep calls but i did
    sleep(5)
    notBot(driver)
    sleep(2)
    acceptCookies(driver)
    sleep(2)
    # take a screenshot of the visible viewport
    driver.save_screenshot(f'homepage_{country.upper()}_{url}.png')
    # this call only exists for firefox webdrivers; saved under a different name
    # so it doesn't overwrite the viewport screenshot above
    driver.save_full_page_screenshot(f'homepage_full_{country.upper()}_{url}.png')
import pickle

import numpy as np
import torch
from torchvision import datasets, models, transforms
from tqdm import tqdm


class ImageFolderWithPaths(datasets.ImageFolder):
    """Custom dataset that includes image file paths. Extends
    torchvision.datasets.ImageFolder
    """
    # override the __getitem__ method. this is the method that dataloader calls
    def __getitem__(self, index):
        # this is what ImageFolder normally returns
        original_tuple = super(ImageFolderWithPaths, self).__getitem__(index)
        # the image file path
        path = self.imgs[index][0]
        # make a new tuple that includes original and the path
        tuple_with_path = (original_tuple + (path,))
        return tuple_with_path
# identify the path containing all your images
# if you want them to be labeled by country, you will need to sort them into folders
root_path = '...'
# transform the data so they are identical shapes
transform = transforms.Compose([transforms.Resize((255, 255)),
transforms.ToTensor()])
dataset = ImageFolderWithPaths(root_path, transform=transform)
# load the data
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
# initialize model (torchvision's pretrained ResNet-101; inference on CPU here)
device = torch.device('cpu')
model = models.resnet101(pretrained=True)
model.eval()
model.to(device)
# initialize variables to store results
features = None
labels = []
image_paths = []
# run the model
for batch in tqdm(dataloader, desc='Running the model inference'):
    images = batch[0].to(device)
    labels += batch[1].tolist()
    image_paths += list(batch[2])
    output = model(images)
    # convert from tensor to numpy array
    current_features = output.detach().numpy()
    if features is not None:
        features = np.concatenate((features, current_features))
    else:
        features = current_features
# map the labels back to their string interpretations
labels = [dataset.classes[e] for e in labels]

# save the data (the raw images can always be re-read later from image_paths)
np.save('features.npy', features)
with open('labels.pkl', 'wb') as f:
    pickle.dump(labels, f)
with open('image_paths.pkl', 'wb') as f:
    pickle.dump(image_paths, f)
import random

import cv2
from sklearn.manifold import TSNE

# the s in t-SNE stands for stochastic (random)
# let's set a seed for reproducible results
seed = 10
random.seed(seed)
torch.manual_seed(seed)
np.random.seed(seed)
# run tsne
n_components = 2
tsne = TSNE(n_components)
tsne_result = tsne.fit_transform(features)
# scale and move the coordinates so they fit [0; 1] range
tx = scale_to_01_range(tsne_result[:,0])
ty = scale_to_01_range(tsne_result[:, 1])
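The snippets above and below also call two helpers and use a tsne_plot canvas that aren't defined anywhere in the excerpt. Here is a minimal sketch of what they might look like; the 1000-pixel canvas size and the centring/flipping choices are assumptions, and in a real script these would sit before their first use.

# assumed blank white canvas the thumbnails get pasted onto
plot_size = 1000
tsne_plot = 255 * np.ones((plot_size, plot_size, 3), dtype=np.uint8)

def scale_to_01_range(x):
    # shift and rescale a 1-D array so its values span [0, 1]
    return (x - x.min()) / (x.max() - x.min())

def compute_plot_coordinates(image, x, y):
    # map a [0, 1] t-SNE coordinate to the pixel corners where the image will be pasted
    h, w, _ = image.shape
    tl_x = int(x * (plot_size - w))
    tl_y = int((1 - y) * (plot_size - h))  # flip y so larger values appear higher up
    return tl_x, tl_y, tl_x + w, tl_y + h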
# plot the images
for image_path, x, y in zip(image_paths, tx, ty):
    # read the image
    image = cv2.imread(image_path)
    # resize the image
    image = cv2.resize(image, (150, 100))
    # compute the pixel corners of the image based on its t-SNE coordinates
    tl_x, tl_y, br_x, br_y = compute_plot_coordinates(image, x, y)
    # paste the image at its t-SNE coordinates using numpy sub-array indices
    tsne_plot[tl_y:br_y, tl_x:br_x, :] = image

cv2.imshow('t-SNE', tsne_plot)
cv2.waitKey()
import pandas as pd
import seaborn as sns

# import data mapping each country to higher-level attributes (writing system, etc.)
analysis_data = ...

# initialize a list to capture a parallel set of labels
# so instead of the country, we can label our data through writing system, etc.
new_labels = []

# iterate through our pre-existing labels and use them to inform our new_labels
for label in labels:
    # select the new_label based on the old label (the country name);
    # 'writing_system' is just an example column name in analysis_data
    new_label = analysis_data.loc[analysis_data['country'] == label, 'writing_system'].iloc[0]
    new_labels.append(new_label)

# use the new_labels to colour a scatterplot with our tsne_results
tsne_df = pd.DataFrame({'tsne_1': tx, 'tsne_2': ty, 'label': new_labels})
sns.scatterplot(x='tsne_1', y='tsne_2', data=tsne_df, hue='label')
AI can analyze this?
What is culture? Characteristics, Values, Knowledge, and Lifestyle.
Smartphone penetration rate in 2013: South Korea 73%, USA and Canada 56%, China 47%, Japan 25%.
How the Brain Processes Information in Western vs. Eastern Cultures (Cultural Psychology).
Western cultures process information as analytic thinkers – focusing on individual objects and the specific details and attributes of those single objects.
Eastern cultures process information as holistic thinkers – seeing the picture as a whole and focusing on the relationships between objects.
Holistic thinkers (East Asians) tend to see the bigger picture, while Westerners tend to see the same picture as individual components.
The Japanese pick up 60% more information about the context and twice as much information about relationships.
East Asians are constantly surrounded by information-rich products and dense physical environments, such as growing up in cities like Tokyo. Being surrounded by these things further pushes East Asians to normalize constantly processing a lot of information all at once.
This enables designers to cram a lot of information onto a single web page, and it leads to websites like Yahoo Japan: all the information is presented without being broken up, and users are still expected to find what they need easily.
Westerners who access these types of websites tend to be overwhelmed by the amount of information that’s presented all at once. However, East Asians are actually used to this density.
Another study on Europeans, Canadians, and East Asians examined how fast they can find relevant information on complex and dense websites. They found that East Asians, in general, were faster at finding an article on the complex and lengthy website that the researcher created.
This is something that has been built in through the influence of culture, and this slight difference in cognitive processing changes the way the products they use are designed.
Just because something isn’t up to our standards, is it bad?
If you can get users who struggle with focus to onboard and sustain interest in your product, just think what you can do for everyone else.
Designing for users with ADHD
In clear terms and their own language, ask the user upfront what their goal is. To give them a focused experience, it’s vital that you know what they are looking for. This is also a great opportunity to build your product strategy by getting data on which choice users prefer. You’ll find some great examples of this in fitness and wellbeing apps.
Help the user stick to this goal — be disciplined about reducing options. Give them one thing to handle at a time. Use pagination rather than infinite scroll. Offer upsell options only when you’re sure central tasks have been completed.
Include users with ADHD in your research processes. If possible, observe in context — it’s one thing to complete a task in a dedicated session; another when you have a million other distractions to hyperfocus on. Also, probably best to avoid diary studies. They are not likely to be completed 🙂
Reduce anxiety. In particular, avoid urgency signalling (“last chance” messaging, overuse of notifications, etc.). The Humane Design Guide has some great examples of how to do this.
Give encouragement. Use rewarding elements like a checklist or a progress bar to show users visually how they are on track to reaching their goal.
Don’t ask users to remember important information across platforms. This strains short-term memory. Allow users to log in with existing services (Google, Apple…). Provide a record of anything important in context (i.e. inside your product).
There are software development and consumer preferences that your users may not mention during informal user research conversations, even though they are critical:
Websites and apps should be fast and performant.
Websites and apps should respond to user input.
Websites and apps should be clear and easy-to-use.
Websites and apps shouldn’t have bugs.
Websites and apps should work on all screen sizes.
Users may not have expertise in software engineering, product management, or product design, so it’s unfair to expect them to understand UI/UX.
Do you think users are able to accurately predict their future behavior when it comes to a software product’s “great new design”?
“Traditional consumer research is just as likely to unearth falsehoods as it is truths. In fact, behavioral science has proven just how bad humans are at understanding why we do what we do, and has shown that most of the time consumers either don’t know what they want.” — Adam Cleaver on WRAC
“The overconfidence effect is a well-established bias in which a person’s subjective confidence in their judgments is reliably greater than the objective accuracy of those judgments, especially when confidence is relatively high.[1][2]” — Wikipedia
Your users may not admit when they don’t know, so they often make uninformed guesses instead of asking questions.
“Myth: People can tell you what they want
Many organizations still rely on asking people what changes they’d like to see in their website or service, neglecting historical research failures like the New Coke or the Aeron chair.
When asking people, you have to be aware that people make confident but false predictions about their future behavior, especially when presented with a new and unfamiliar design.” — UXMyths.com
“When a company invites you privately to show you their ‘exciting, new design’ for their software product, do you think most people will say, ‘Yeah, this sucks? Stop changing things for no reason!’ No chance! However, when you’re a daily user of a software product, and they change the entire user experience for no reason, most people will say just that!”
“Some people say, “Give the customers what they want.” But that’s not my approach. Our job is to figure out what they’re going to want before they do. I think Henry Ford once said, “If I’d asked customers what they wanted, they would have told me, ‘A faster horse!’” People don’t know what they want until you show it to them. That’s why I never rely on market research. Our task is to read things that are not yet on the page.” — Steve Jobs
Steve Jobs and Henry Ford both had it exactly right: consumers often don’t know what they want, at least until a new product becomes popular on social media due to its outstanding user experience.
Everyone tends to be overly confident in their own abilities, whether it’s evaluating user research as a stakeholder or predicting their own future behavior as a user participating in research. Relying on qualitative user interviews as your sole method of user research, without quantitative usability testing (observing and timing your users as they complete tasks in your product), is garbage in, garbage out.