Testing and QA in Games: A Complete Quality Assurance Process
Introduction: Why QA Matters in Games
Quality Assurance (QA) is the difference between a disastrous launch and a resounding success. In a market where players expect polished experiences from day one, effective QA is not optional - it is essential. This comprehensive guide explores testing methodologies, automation, tools, and strategies to ensure your game ships with the highest possible quality.
The Cost of Bugs
A single game-breaking bug can destroy years of hard work. Negative reviews, mass refunds, and lasting damage to your reputation are real consequences of inadequate QA. Investing in robust testing is investing in your game's success.
Game Testing Fundamentals
Types of Tests
# Test categorization framework for games
class GameTestingFramework:
def __init__(self):
self.test_categories = {
"functional": {
"description": "Verifica se features funcionam como esperado",
"priority": "Critical",
"examples": [
"Menus navegáveis",
"Controles responsivos",
"Save/Load funcional",
"Progressão correta"
],
"automation_potential": "High"
},
"compatibility": {
"description": "Testa em diferentes plataformas/hardware",
"priority": "High",
"examples": [
"Diferentes GPUs",
"Múltiplas resoluções",
"Vários controles",
"Sistemas operacionais"
],
"automation_potential": "Medium"
},
"performance": {
"description": "Avalia FPS, loading times, memória",
"priority": "High",
"examples": [
"Frame rate estável",
"Tempo de loading",
"Uso de memória",
"Thermal throttling"
],
"automation_potential": "High"
},
"gameplay": {
"description": "Testa mecânicas, balanceamento, diversão",
"priority": "Critical",
"examples": [
"Dificuldade apropriada",
"Progressão satisfatória",
"Mecânicas intuitivas",
"Loop viciante"
],
"automation_potential": "Low"
},
"localization": {
"description": "Verifica traduções e adaptações culturais",
"priority": "Medium",
"examples": [
"Textos traduzidos",
"Audio localizado",
"Formatação de data/moeda",
"Conteúdo culturalmente apropriado"
],
"automation_potential": "Medium"
},
"compliance": {
"description": "Conformidade com padrões de plataforma",
"priority": "Critical",
"examples": [
"ESRB/PEGI ratings",
"Platform TCRs",
"Accessibility standards",
"Legal requirements"
],
"automation_potential": "Medium"
},
"multiplayer": {
"description": "Testa funcionalidades online",
"priority": "High",
"examples": [
"Matchmaking",
"Lag compensation",
"Sincronização",
"Cheating prevention"
],
"automation_potential": "Medium"
},
"security": {
"description": "Identifica vulnerabilidades",
"priority": "High",
"examples": [
"Injection attacks",
"Memory exploits",
"Piracy protection",
"Data protection"
],
"automation_potential": "High"
}
        }
        self._test_id_counter = 0

    def generate_test_id(self):
        """Sequential test case id (simple counter-based placeholder)."""
        self._test_id_counter += 1
        return f"TC-{self._test_id_counter:04d}"

    def create_test_plan(self, game_type, platform, scope):
        """Create a customized test plan"""
test_plan = {
"game_info": {
"type": game_type,
"platform": platform,
"scope": scope
},
"test_phases": [],
"resources_needed": {},
"timeline": {}
}
        # Test phases mapped to the development stage
phases = [
{
"phase": "Alpha",
"focus": ["functional", "gameplay"],
"coverage": 60,
"duration_weeks": 4
},
{
"phase": "Beta",
"focus": ["functional", "performance", "multiplayer"],
"coverage": 80,
"duration_weeks": 6
},
{
"phase": "Release Candidate",
"focus": ["all"],
"coverage": 95,
"duration_weeks": 2
},
{
"phase": "Gold",
"focus": ["critical_path", "compliance"],
"coverage": 100,
"duration_weeks": 1
}
]
for phase in phases:
test_phase = self.generate_phase_tests(phase, game_type)
test_plan["test_phases"].append(test_phase)
return test_plan
def generate_phase_tests(self, phase, game_type):
"""Gerar testes específicos para cada fase"""
test_cases = []
for focus_area in phase["focus"]:
if focus_area in self.test_categories:
category = self.test_categories[focus_area]
for example in category["examples"]:
test_case = {
"id": self.generate_test_id(),
"category": focus_area,
"description": example,
"priority": category["priority"],
"automated": category["automation_potential"] == "High",
"phase": phase["phase"]
}
test_cases.append(test_case)
return {
"phase": phase["phase"],
"test_cases": test_cases,
"total_cases": len(test_cases),
"estimated_hours": len(test_cases) * 0.5 # 30min por caso
}
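A minimal usage sketch of the framework above (the game type, platform, and scope values are illustrative):
# Minimal usage sketch of GameTestingFramework
framework = GameTestingFramework()
plan = framework.create_test_plan(
    game_type="action_rpg",   # illustrative values
    platform="PC",
    scope="full_release"
)
for phase in plan["test_phases"]:
    print(f'{phase["phase"]}: {phase["total_cases"]} cases, ~{phase["estimated_hours"]}h')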
Testing Methodology
using System;
using UnityEngine;
using System.Collections.Generic;
using System.Diagnostics;
public class TestMethodology : MonoBehaviour
{
    // Structured test methodology system
public class TestingMethodology
{
public enum TestingApproach
{
            BlackBox,    // No knowledge of the code
            WhiteBox,    // Full access to the code
            GrayBox,     // Partial knowledge of the code
            Exploratory, // Unscripted, free-form testing
            Regression,  // Re-testing after changes
            Smoke,       // Quick basic check
            Sanity       // Focused verification
}
[System.Serializable]
public class TestCase
{
public string testCaseId;
public string title;
public string description;
public string preconditions;
public List<TestStep> steps;
public string expectedResult;
public string actualResult;
public TestStatus status;
public TestPriority priority;
public float executionTime;
public string assignedTester;
public List<string> attachments;
}
[System.Serializable]
public class TestStep
{
public int stepNumber;
public string action;
public string expectedOutcome;
public string actualOutcome;
public bool passed;
}
public enum TestStatus
{
NotExecuted,
Passed,
Failed,
Blocked,
Skipped,
InProgress
}
public enum TestPriority
{
Critical, // Game breaking
High, // Major feature broken
Medium, // Minor feature broken
Low // Cosmetic issue
}
public class TestExecutor
{
private List<TestCase> testCases = new List<TestCase>();
private TestResults currentResults;
public void ExecuteTestSuite(List<TestCase> suite)
{
currentResults = new TestResults();
currentResults.StartTime = System.DateTime.Now;
foreach (var testCase in suite)
{
ExecuteTestCase(testCase);
}
currentResults.EndTime = System.DateTime.Now;
GenerateReport(currentResults);
}
void ExecuteTestCase(TestCase testCase)
{
UnityEngine.Debug.Log($"Executing Test: {testCase.title}");
                // Check preconditions
if (!CheckPreconditions(testCase.preconditions))
{
testCase.status = TestStatus.Blocked;
return;
}
                // Execute each step
foreach (var step in testCase.steps)
{
bool stepPassed = ExecuteStep(step);
if (!stepPassed)
{
testCase.status = TestStatus.Failed;
LogFailure(testCase, step);
break;
}
}
if (testCase.status != TestStatus.Failed)
{
testCase.status = TestStatus.Passed;
}
                // Record the result
currentResults.AddResult(testCase);
}
bool ExecuteStep(TestStep step)
{
try
{
                    // Perform the action
                    PerformAction(step.action);
                    // Check the result
string outcome = GetActualOutcome();
step.actualOutcome = outcome;
step.passed = (outcome == step.expectedOutcome);
return step.passed;
}
catch (System.Exception e)
{
UnityEngine.Debug.LogError($"Test step failed: {e.Message}");
step.passed = false;
step.actualOutcome = e.Message;
return false;
}
}
void LogFailure(TestCase testCase, TestStep failedStep)
{
var bugReport = new BugReport
{
TestCaseId = testCase.testCaseId,
Title = $"Test Failed: {testCase.title}",
Description = $"Step {failedStep.stepNumber} failed\n" +
$"Expected: {failedStep.expectedOutcome}\n" +
$"Actual: {failedStep.actualOutcome}",
Severity = MapPriorityToSeverity(testCase.priority),
ReproductionSteps = ConvertStepsToRepro(testCase.steps),
Screenshot = CaptureScreenshot(),
SystemInfo = GatherSystemInfo()
};
BugTracker.Instance.ReportBug(bugReport);
}
}
public class TestResults
{
public DateTime StartTime { get; set; }
public DateTime EndTime { get; set; }
public int TotalTests { get; set; }
public int PassedTests { get; set; }
public int FailedTests { get; set; }
public int BlockedTests { get; set; }
public float PassRate => (float)PassedTests / TotalTests * 100;
public List<TestCase> FailedCases { get; set; }
}
}
}
Test Automation
Automation Framework
// Game test automation framework
// (In JavaScript, class declarations cannot be nested directly inside another
// class body, so the recorder and runner are top-level classes composed below.)

// Recording-and-replay system
class TestRecorder {
constructor() {
this.recording = false;
this.events = [];
this.startTime = 0;
}
startRecording() {
this.recording = true;
this.events = [];
this.startTime = Date.now();
        // Capture events
this.captureInputEvents();
this.captureGameStateChanges();
this.captureNetworkEvents();
}
captureInputEvents() {
// Mouse
document.addEventListener('click', (e) => {
if (this.recording) {
this.events.push({
type: 'click',
timestamp: Date.now() - this.startTime,
x: e.clientX,
y: e.clientY,
target: e.target.id
});
}
});
// Keyboard
document.addEventListener('keydown', (e) => {
if (this.recording) {
this.events.push({
type: 'keydown',
timestamp: Date.now() - this.startTime,
key: e.key,
code: e.code
});
}
});
// Gamepad
window.addEventListener('gamepadconnected', (e) => {
const pollGamepad = () => {
if (!this.recording) return;
const gamepad = navigator.getGamepads()[e.gamepad.index];
gamepad.buttons.forEach((button, index) => {
if (button.pressed) {
this.events.push({
type: 'gamepad_button',
timestamp: Date.now() - this.startTime,
button: index,
value: button.value
});
}
});
requestAnimationFrame(pollGamepad);
};
pollGamepad();
});
}
stopRecording() {
this.recording = false;
return this.generateTestScript();
}
generateTestScript() {
const script = {
metadata: {
recordedAt: new Date().toISOString(),
duration: Date.now() - this.startTime,
eventCount: this.events.length
},
events: this.events,
assertions: this.generateAssertions()
};
return script;
}
generateAssertions() {
        // Generate automatic assertions based on recorded state
        const assertions = [];
        // Game state assertions
assertions.push({
type: 'gameState',
check: 'playerHealth',
operator: '>',
value: 0,
message: 'Player should be alive'
});
        // UI assertions
assertions.push({
type: 'ui',
check: 'elementVisible',
target: 'mainMenu',
value: true,
message: 'Main menu should be visible'
});
return assertions;
}
}
// Automated test executor
class TestRunner {
async runTestScript(script) {
const results = {
passed: 0,
failed: 0,
errors: [],
startTime: Date.now()
};
try {
// Setup
await this.setupTestEnvironment();
// Replay events
for (const event of script.events) {
await this.waitForTimestamp(event.timestamp);
await this.replayEvent(event);
}
            // Validate assertions
for (const assertion of script.assertions) {
const result = await this.validateAssertion(assertion);
if (result.passed) {
results.passed++;
} else {
results.failed++;
results.errors.push(result.error);
}
}
} catch (error) {
results.errors.push({
type: 'execution_error',
message: error.message,
stack: error.stack
});
}
results.endTime = Date.now();
results.duration = results.endTime - results.startTime;
return results;
}
async replayEvent(event) {
switch (event.type) {
case 'click':
await this.simulateClick(event.x, event.y);
break;
case 'keydown':
await this.simulateKeyPress(event.key);
break;
case 'gamepad_button':
await this.simulateGamepadButton(event.button, event.value);
break;
}
}
async validateAssertion(assertion) {
try {
let actual;
switch (assertion.type) {
case 'gameState':
actual = await this.getGameState(assertion.check);
break;
case 'ui':
actual = await this.getUIState(assertion.check, assertion.target);
break;
case 'performance':
actual = await this.getPerformanceMetric(assertion.check);
break;
}
const passed = this.compareValues(actual, assertion.operator, assertion.value);
return {
passed,
error: passed ? null : {
assertion: assertion,
actual: actual,
expected: assertion.value,
message: assertion.message
}
};
} catch (error) {
return {
passed: false,
error: {
assertion: assertion,
message: error.message
}
};
}
}
compareValues(actual, operator, expected) {
switch (operator) {
case '==': return actual == expected;
case '===': return actual === expected;
case '>': return actual > expected;
case '<': return actual < expected;
case '>=': return actual >= expected;
case '<=': return actual <= expected;
case '!=': return actual != expected;
case 'contains': return actual.includes(expected);
case 'matches': return new RegExp(expected).test(actual);
default: return false;
}
}
}
}
// Facade that wires the framework pieces together
class GameTestAutomation {
    constructor() {
        this.testRecorder = new TestRecorder();
        this.testRunner = new TestRunner();
        // this.testValidator = new TestValidator(); // validator class not shown here
    }
}
Unit Tests for Games
using System;
using NUnit.Framework;
using UnityEngine;
using UnityEngine.TestTools;
using Unity.PerformanceTesting; // needed for [Performance], Measure and PerformanceTest
using System.Collections;
using Object = UnityEngine.Object; // avoid ambiguity with System.Object
public class GameUnitTests
{
    // Gameplay tests
[TestFixture]
public class PlayerTests
{
private GameObject playerObject;
private PlayerController player;
[SetUp]
public void Setup()
{
playerObject = new GameObject("Player");
player = playerObject.AddComponent<PlayerController>();
player.Initialize(100, 10); // health, damage
}
[Test]
public void Player_TakeDamage_ReducesHealth()
{
// Arrange
float initialHealth = player.Health;
float damageAmount = 20f;
// Act
player.TakeDamage(damageAmount);
// Assert
Assert.AreEqual(initialHealth - damageAmount, player.Health);
}
[Test]
public void Player_TakeLethalDamage_Dies()
{
// Arrange
float lethalDamage = player.Health + 10;
// Act
player.TakeDamage(lethalDamage);
// Assert
Assert.IsTrue(player.IsDead);
Assert.AreEqual(0, player.Health);
}
[Test]
public void Player_Heal_IncreasesHealth()
{
// Arrange
player.TakeDamage(50);
float healthBefore = player.Health;
float healAmount = 30;
// Act
player.Heal(healAmount);
// Assert
Assert.AreEqual(healthBefore + healAmount, player.Health);
}
[Test]
public void Player_HealBeyondMax_ClampsToMaxHealth()
{
// Act
player.Heal(999);
// Assert
Assert.AreEqual(player.MaxHealth, player.Health);
}
[UnityTest]
public IEnumerator Player_Jump_IncreasesYPosition()
{
// Arrange
player.gameObject.AddComponent<Rigidbody>();
float initialY = player.transform.position.y;
// Act
player.Jump();
yield return new WaitForSeconds(0.5f);
// Assert
Assert.Greater(player.transform.position.y, initialY);
}
[TearDown]
public void TearDown()
{
Object.DestroyImmediate(playerObject);
}
}
    // Inventory tests
[TestFixture]
public class InventoryTests
{
private Inventory inventory;
[SetUp]
public void Setup()
{
inventory = new Inventory(10); // 10 slots
}
[Test]
public void Inventory_AddItem_IncreasesCount()
{
// Arrange
var item = new Item { Id = "sword", Name = "Iron Sword", Stackable = false };
// Act
bool added = inventory.AddItem(item);
// Assert
Assert.IsTrue(added);
Assert.AreEqual(1, inventory.ItemCount);
Assert.IsTrue(inventory.HasItem("sword"));
}
[Test]
public void Inventory_AddStackableItems_StacksProperly()
{
// Arrange
var potion = new Item { Id = "potion", Name = "Health Potion", Stackable = true };
// Act
inventory.AddItem(potion, 5);
inventory.AddItem(potion, 3);
// Assert
Assert.AreEqual(1, inventory.UsedSlots); // Should occupy one slot
Assert.AreEqual(8, inventory.GetItemQuantity("potion"));
}
[Test]
public void Inventory_Full_RejectsNewItems()
{
// Arrange
for (int i = 0; i < 10; i++)
{
inventory.AddItem(new Item { Id = $"item_{i}" });
}
// Act
bool added = inventory.AddItem(new Item { Id = "extra_item" });
// Assert
Assert.IsFalse(added);
Assert.IsFalse(inventory.HasItem("extra_item"));
}
[TestCase(5, 3, 2)]
[TestCase(10, 10, 0)]
[TestCase(1, 2, 0)]
public void Inventory_RemoveItem_UpdatesQuantity(int initial, int toRemove, int expected)
{
// Arrange
var item = new Item { Id = "arrow", Stackable = true };
inventory.AddItem(item, initial);
// Act
int removed = inventory.RemoveItem("arrow", toRemove);
// Assert
Assert.AreEqual(Math.Min(initial, toRemove), removed);
Assert.AreEqual(expected, inventory.GetItemQuantity("arrow"));
}
}
    // Performance tests
[TestFixture]
public class PerformanceTests
{
[Test]
[Performance]
public void PathfindingPerformance_FindPath_Under5ms()
{
Measure.Method(() =>
{
var pathfinder = new AStarPathfinder();
var start = new Vector3(0, 0, 0);
var end = new Vector3(100, 0, 100);
pathfinder.FindPath(start, end);
})
.WarmupCount(10)
.MeasurementCount(100)
.IterationsPerMeasurement(1)
.GC()
.Run();
// Performance assertion
var results = PerformanceTest.Active;
Assert.Less(results.median, 5.0, "Pathfinding took more than 5ms");
}
[Test]
public void MemoryLeak_SpawningEnemies_NoLeaks()
{
// Arrange
long initialMemory = GC.GetTotalMemory(true);
var enemyPool = new ObjectPool<Enemy>(100);
// Act - Spawn and destroy many times
for (int i = 0; i < 1000; i++)
{
var enemy = enemyPool.Get();
enemy.Initialize();
enemyPool.Return(enemy);
}
// Force GC
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
long finalMemory = GC.GetTotalMemory(true);
// Assert - Memory shouldn't grow significantly
long memoryGrowth = finalMemory - initialMemory;
Assert.Less(memoryGrowth, 1024 * 1024, "Memory grew by more than 1MB");
}
}
}
Bug Tracking and Reporting
Bug Tracking System
# Complete bug tracking and management system
from datetime import datetime


class BugTrackingSystem:
def __init__(self):
self.bugs = []
self.bug_id_counter = 1000
class Bug:
def __init__(self, **kwargs):
self.id = kwargs.get('id')
self.title = kwargs.get('title')
self.description = kwargs.get('description')
self.severity = kwargs.get('severity', 'Medium')
self.priority = kwargs.get('priority', 'Medium')
self.status = kwargs.get('status', 'New')
self.category = kwargs.get('category', 'Gameplay')
self.platform = kwargs.get('platform', 'All')
self.version = kwargs.get('version')
self.reported_by = kwargs.get('reported_by')
self.assigned_to = kwargs.get('assigned_to')
self.reproduction_steps = kwargs.get('reproduction_steps', [])
self.expected_result = kwargs.get('expected_result')
self.actual_result = kwargs.get('actual_result')
self.attachments = kwargs.get('attachments', [])
self.created_date = kwargs.get('created_date')
self.modified_date = kwargs.get('modified_date')
self.resolution = kwargs.get('resolution')
self.comments = kwargs.get('comments', [])
self.duplicate_of = kwargs.get('duplicate_of')
self.regression = kwargs.get('regression', False)
    def generate_bug_id(self):
        """Sequential bug id, based on the counter initialized in __init__."""
        self.bug_id_counter += 1
        return self.bug_id_counter

    def notify_team(self, bug):
        """Notification hook; e-mail/Slack integration would go here."""
        pass

    def calculate_frequency(self, bug):
        """Placeholder frequency estimate; real systems derive this from occurrence data."""
        return 'Medium'

    def report_bug(self, bug_data):
        """Report a new bug, with validation"""
        # Validate required fields
        required_fields = ['title', 'description', 'reproduction_steps']
        for field in required_fields:
            if field not in bug_data:
                raise ValueError(f"Missing required field: {field}")
        # Check for duplicates
        duplicate = self.check_for_duplicates(bug_data)
        if duplicate:
            return {
                'status': 'duplicate',
                'duplicate_id': duplicate.id,
                'message': f"Bug appears to be a duplicate of #{duplicate.id}"
            }
        # Create the new bug
        bug = self.Bug(
            id=self.generate_bug_id(),
            **bug_data,
            created_date=datetime.now(),
            modified_date=datetime.now()
        )
        # Automatically calculate priority
        bug.priority = self.calculate_priority(bug)
        # Add to the system
        self.bugs.append(bug)
        # Notify the relevant team
        self.notify_team(bug)
return {
'status': 'created',
'bug_id': bug.id,
'priority': bug.priority
}
def calculate_priority(self, bug):
"""Calcular prioridade baseado em múltiplos fatores"""
priority_matrix = {
('Critical', 'High'): 'P1',
('Critical', 'Medium'): 'P1',
('Critical', 'Low'): 'P2',
('Major', 'High'): 'P2',
('Major', 'Medium'): 'P3',
('Major', 'Low'): 'P4',
('Minor', 'High'): 'P3',
('Minor', 'Medium'): 'P4',
('Minor', 'Low'): 'P5',
('Trivial', 'High'): 'P4',
('Trivial', 'Medium'): 'P5',
('Trivial', 'Low'): 'P5'
}
        # Additional factors
        factors = {
            'affects_main_path': -1,  # raises priority
            'workaround_exists': +1,  # lowers priority
            'regression': -2,         # raises priority a lot
            'edge_case': +1,          # lowers priority
            'data_loss': -2,          # raises priority a lot
            'cosmetic': +2            # lowers priority a lot
        }
base_priority = priority_matrix.get(
(bug.severity, self.calculate_frequency(bug)),
'P3'
)
        # Adjust based on the factors above
        priority_value = int(base_priority[1])
        for factor, adjustment in factors.items():
            if getattr(bug, factor, False):
                priority_value += adjustment
        # Clamp between P1 and P5
        priority_value = max(1, min(5, priority_value))
return f"P{priority_value}"
def check_for_duplicates(self, bug_data):
"""Verificar se bug já foi reportado"""
# Usar NLP simples para comparar similaridade
from difflib import SequenceMatcher
threshold = 0.8 # 80% similaridade
for existing_bug in self.bugs:
if existing_bug.status == 'Closed':
continue
            # Compare titles
            title_similarity = SequenceMatcher(
                None,
                bug_data['title'].lower(),
                existing_bug.title.lower()
            ).ratio()
            # Compare descriptions
            desc_similarity = SequenceMatcher(
                None,
                bug_data['description'].lower(),
                existing_bug.description.lower()
            ).ratio()
            # Very similar titles or descriptions usually mean a duplicate
            if title_similarity > threshold or desc_similarity > threshold:
                return existing_bug
return None
    def generate_test_matrix(self):
        """Generate a test matrix based on reported bugs"""
test_matrix = {
'platforms': set(),
'categories': set(),
'high_risk_areas': [],
'regression_tests': []
}
for bug in self.bugs:
test_matrix['platforms'].add(bug.platform)
test_matrix['categories'].add(bug.category)
            # Identify high-risk areas
if bug.severity in ['Critical', 'Major']:
test_matrix['high_risk_areas'].append({
'area': bug.category,
'reason': f"Bug #{bug.id}: {bug.title}",
'test_priority': 'High'
})
            # Add regression tests for fixed bugs
if bug.status == 'Fixed':
test_matrix['regression_tests'].append({
'bug_id': bug.id,
'test_name': f"Regression_Bug_{bug.id}",
'steps': bug.reproduction_steps,
'expected': "Bug não deve reaparecer"
})
return test_matrix
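A short usage sketch for the tracker above, with illustrative field values (it relies on the placeholder helpers defined in the class):
# Minimal usage sketch of BugTrackingSystem
tracker = BugTrackingSystem()
result = tracker.report_bug({
    'title': 'Game crashes when loading a save on level 3',
    'description': 'Loading an autosave created on level 3 closes the game to desktop.',
    'reproduction_steps': [
        'Start a new game and reach level 3',
        'Wait for the autosave icon',
        'Quit to the main menu and load the autosave'
    ],
    'severity': 'Critical',
    'platform': 'PC',
    'version': '0.9.1'
})
print(result)  # -> {'status': 'created', 'bug_id': 1001, 'priority': 'P1'}
print(tracker.generate_test_matrix()['high_risk_areas'])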
Platform Testing
Console Certification
using System.Collections.Generic;
using NUnit.Framework;
public class PlatformCertificationTests
{
    // Tests for platform certification
    public class TCRCompliance // Technical Certification Requirements
{
[System.Serializable]
public class PlatformRequirement
{
public string requirementId;
public string description;
public string category;
public bool mandatory;
public string testMethod;
public string expectedResult;
}
// PlayStation Requirements
public class PlayStationTCR
{
public List<PlatformRequirement> requirements = new List<PlatformRequirement>
{
new PlatformRequirement
{
requirementId = "PS-001",
description = "Jogo não deve travar (crash)",
category = "Stability",
mandatory = true,
testMethod = "4 horas de gameplay contínuo",
expectedResult = "Zero crashes"
},
new PlatformRequirement
{
requirementId = "PS-002",
description = "Save data não deve corromper",
category = "Save System",
mandatory = true,
testMethod = "Interromper save em vários pontos",
expectedResult = "Save recuperável ou erro gracioso"
},
new PlatformRequirement
{
requirementId = "PS-003",
description = "Suporte a suspend/resume",
category = "System Features",
mandatory = true,
testMethod = "Suspender em qualquer momento",
expectedResult = "Resume perfeito"
}
};
public void RunComplianceTests()
{
foreach (var req in requirements)
{
TestRequirement(req);
}
}
}
// Xbox Requirements
public class XboxTCR
{
public void TestXboxLiveIntegration()
{
// Gamertag display
Assert.IsTrue(IsGamertagVisible(), "Gamertag must be visible");
// Achievements
Assert.IsTrue(AchievementsWork(), "Achievements must unlock properly");
// Cloud saves
Assert.IsTrue(CloudSavesSync(), "Cloud saves must sync");
// Multiplayer
if (HasMultiplayer())
{
Assert.IsTrue(MultiplayerConnects(), "Multiplayer must work");
}
}
}
// Nintendo Requirements
public class NintendoLotCheck
{
public void TestSwitchSpecific()
{
// Docked/Handheld transition
TestDockedHandheldTransition();
// Joy-Con support
TestJoyConConfigurations();
// Sleep mode
TestSleepModeRecovery();
// User switching
TestUserAccountSwitching();
}
void TestDockedHandheldTransition()
{
                // Simulate the docked/handheld transition
for (int i = 0; i < 100; i++)
{
SimulateDock();
System.Threading.Thread.Sleep(1000);
SimulateUndock();
System.Threading.Thread.Sleep(1000);
                    // Verify the game did not hang
Assert.IsTrue(GameIsResponsive());
}
}
}
}
}
Mobile Testing
// Mobile testing framework
class MobileTesting {
constructor() {
this.devices = this.loadDeviceProfiles();
this.testSuites = [];
}
loadDeviceProfiles() {
return {
ios: [
{ model: "iPhone 15 Pro", os: "iOS 17", screen: "6.1", ram: 8 },
{ model: "iPhone 12", os: "iOS 16", screen: "6.1", ram: 4 },
{ model: "iPhone SE", os: "iOS 15", screen: "4.7", ram: 3 },
{ model: "iPad Pro", os: "iPadOS 17", screen: "12.9", ram: 16 },
{ model: "iPad Mini", os: "iPadOS 16", screen: "8.3", ram: 4 }
],
android: [
{ model: "Samsung S24", os: "Android 14", screen: "6.8", ram: 12 },
{ model: "Pixel 8", os: "Android 14", screen: "6.2", ram: 8 },
{ model: "OnePlus 11", os: "Android 13", screen: "6.7", ram: 8 },
{ model: "Xiaomi 13", os: "Android 13", screen: "6.36", ram: 8 },
{ model: "Budget Device", os: "Android 10", screen: "5.5", ram: 2 }
]
};
}
createMobileTestSuite() {
return {
performance: [
{
test: "FPS on minimum spec device",
target: 30,
critical: true
},
{
test: "Loading time",
target: "< 30 seconds",
critical: true
},
{
test: "Memory usage",
target: "< 1GB",
critical: false
},
{
test: "Battery drain",
target: "< 15% per hour",
critical: false
},
{
test: "Thermal throttling",
target: "No throttling in 30 min",
critical: false
}
],
compatibility: [
{
test: "Touch controls responsiveness",
validation: "< 100ms latency"
},
{
test: "Screen orientation changes",
validation: "Smooth transition"
},
{
test: "Interruption handling",
scenarios: ["Phone call", "Notification", "App switch"]
},
{
test: "Different screen sizes",
validation: "UI scales properly"
},
{
test: "Notch/cutout support",
validation: "Content not obscured"
}
],
monetization: [
{
test: "IAP flow",
validation: "Purchase completes"
},
{
test: "Ad integration",
validation: "Ads load and display"
},
{
test: "Restore purchases",
validation: "Previous purchases restored"
}
]
};
}
async runDeviceTests(device) {
const results = {
device: device,
tests: [],
passed: 0,
failed: 0
};
// Performance tests
const fps = await this.measureFPS(device);
results.tests.push({
name: "FPS Test",
result: fps >= 30 ? "PASS" : "FAIL",
value: fps
});
// Memory test
const memory = await this.measureMemoryUsage(device);
results.tests.push({
name: "Memory Test",
result: memory < 1024 ? "PASS" : "FAIL",
value: `${memory}MB`
});
// Battery test
const batteryDrain = await this.measureBatteryDrain(device);
results.tests.push({
name: "Battery Test",
result: batteryDrain < 15 ? "PASS" : "FAIL",
value: `${batteryDrain}% per hour`
});
return results;
}
async measureFPS(device) {
        // Simulated FPS measurement (stands in for a real device hook)
const baselineFPS = {
high_end: 60,
mid_range: 45,
low_end: 30
};
const deviceCategory = this.categorizeDevice(device);
return baselineFPS[deviceCategory] + Math.random() * 10 - 5;
}
categorizeDevice(device) {
if (device.ram >= 8) return 'high_end';
if (device.ram >= 4) return 'mid_range';
return 'low_end';
}
}
Test Planning
Test Plan Template
# Complete test plan template
from datetime import datetime, timedelta


class TestPlanTemplate:
def __init__(self, project_name, version):
self.project_name = project_name
self.version = version
self.test_plan = self.create_test_plan()
def create_test_plan(self):
return {
"1_introduction": {
"purpose": "Define estratégia de testes para garantir qualidade",
"scope": "Todos os aspectos do jogo incluindo gameplay, performance, compatibilidade",
"objectives": [
"Identificar e corrigir bugs críticos",
"Garantir performance aceitável",
"Validar gameplay e balanceamento",
"Certificar para plataformas target"
]
},
"2_test_strategy": {
"approach": "Combinação de testes manuais e automatizados",
"levels": [
"Unit Testing - Componentes individuais",
"Integration Testing - Sistemas integrados",
"System Testing - Jogo completo",
"Acceptance Testing - Validação final"
],
"types": [
"Functional",
"Performance",
"Compatibility",
"Usability",
"Security",
"Localization"
]
},
"3_test_criteria": {
"entry_criteria": [
"Build compilando sem erros",
"Features principais implementadas",
"Ambiente de teste preparado"
],
"exit_criteria": [
"Zero bugs críticos",
"< 5 bugs major",
"Performance targets atingidos",
"Certificação aprovada"
],
"suspension_criteria": [
"> 30% dos testes bloqueados",
"Build não inicia",
"Crash no menu principal"
]
},
"4_resource_planning": {
"human_resources": [
{"role": "QA Lead", "count": 1, "responsibility": "Coordenação e estratégia"},
{"role": "Senior Tester", "count": 2, "responsibility": "Testes complexos e mentoria"},
{"role": "Tester", "count": 5, "responsibility": "Execução de testes"},
{"role": "Automation Engineer", "count": 1, "responsibility": "Automação"}
],
"hardware": [
"PCs com diferentes specs",
"Consoles de desenvolvimento",
"Dispositivos mobile variados",
"Equipamento de captura"
],
"software": [
"Bug tracking system",
"Test management tool",
"Performance profilers",
"Automation frameworks"
]
},
"5_schedule": {
"phases": [
{
"phase": "Alpha Testing",
"duration": "4 weeks",
"focus": "Core functionality",
"team_size": 3
},
{
"phase": "Beta Testing",
"duration": "6 weeks",
"focus": "Full game testing",
"team_size": 8
},
{
"phase": "Release Candidate",
"duration": "2 weeks",
"focus": "Regression and certification",
"team_size": 5
},
{
"phase": "Post-Launch",
"duration": "Ongoing",
"focus": "Live issues and patches",
"team_size": 2
}
]
},
"6_test_deliverables": [
"Test Plan Document",
"Test Cases",
"Test Scripts",
"Bug Reports",
"Test Execution Reports",
"Test Summary Report",
"Certification Reports"
],
"7_risk_assessment": {
"risks": [
{
"risk": "Timeline pressão",
"probability": "High",
"impact": "High",
"mitigation": "Priorização agressiva e overtime planejado"
},
{
"risk": "Mudanças de escopo",
"probability": "Medium",
"impact": "High",
"mitigation": "Buffer de tempo e comunicação clara"
},
{
"risk": "Falta de testers",
"probability": "Low",
"impact": "Medium",
"mitigation": "Outsourcing preparado como backup"
}
]
},
"8_approval": {
"prepared_by": "QA Lead",
"reviewed_by": "Project Manager",
"approved_by": "Producer",
"date": datetime.now().isoformat()
}
}
def generate_test_schedule(self, launch_date):
"""Gerar cronograma reverso de testes"""
from datetime import datetime, timedelta
schedule = []
current_date = launch_date
phases = [
("Gold Master", 1),
("Certification", 2),
("Release Candidate", 2),
("Beta Testing", 6),
("Alpha Testing", 4),
("Pre-Alpha", 2)
]
for phase_name, duration_weeks in phases:
end_date = current_date
start_date = current_date - timedelta(weeks=duration_weeks)
schedule.append({
"phase": phase_name,
"start": start_date.strftime("%Y-%m-%d"),
"end": end_date.strftime("%Y-%m-%d"),
"duration": f"{duration_weeks} weeks",
"deliverables": self.get_phase_deliverables(phase_name)
})
current_date = start_date
return list(reversed(schedule))
def get_phase_deliverables(self, phase):
deliverables = {
"Pre-Alpha": ["Initial test plan", "Test environment setup"],
"Alpha Testing": ["Core functionality validation", "Major bug identification"],
"Beta Testing": ["Full test coverage", "Performance benchmarks"],
"Release Candidate": ["Regression test results", "Go/No-go recommendation"],
"Certification": ["Platform compliance reports"],
"Gold Master": ["Final sign-off", "Day-one patch plan"]
}
return deliverables.get(phase, [])
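A quick sketch of generating a reverse schedule from a target launch date with the template above (project name, version, and date are illustrative):
# Minimal usage sketch of TestPlanTemplate
plan = TestPlanTemplate("My Game", "1.0.0")
schedule = plan.generate_test_schedule(datetime(2026, 3, 13))
for entry in schedule:
    print(f'{entry["phase"]}: {entry["start"]} -> {entry["end"]} ({entry["duration"]})')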
Resources and Tools
Testing Tools
- Unity Test Framework: testing for Unity
- Unreal Automation: Unreal's built-in test system
- Selenium: web automation (minimal sketch below)
- Appium: mobile automation
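As a small illustration of the web-automation entry above, here is a minimal Selenium sketch for a browser-game smoke test; the URL and the canvas check are placeholder assumptions:
# Minimal Selenium smoke test for a browser game (placeholder URL)
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/my-web-game")         # hypothetical game page
    canvas = driver.find_element(By.TAG_NAME, "canvas")   # most HTML5 games render to a canvas
    assert canvas.is_displayed(), "Game canvas did not render"
finally:
    driver.quit()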
Bug Tracking Systems
- Jira: industry standard
- Bugzilla: open source
- Mantis: simple and effective
- GitHub Issues: integrated with your code (see the API sketch below)
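Teams that use GitHub Issues as a lightweight tracker can also file bugs programmatically; a sketch against GitHub's REST issues endpoint, where OWNER, REPO, and the token environment variable are placeholders:
# File a bug report via GitHub's REST API (OWNER/REPO and GITHUB_TOKEN are placeholders)
import os
import requests

def file_bug(title, body, labels=("bug",)):
    response = requests.post(
        "https://api.github.com/repos/OWNER/REPO/issues",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": title, "body": body, "labels": list(labels)},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["number"]  # issue number of the newly filed bug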
Performance Testing
- Unity Profiler: performance analysis
- RenderDoc: GPU debugging
- PerfView: .NET profiling
- Chrome DevTools: for web games (an external memory-sampling sketch follows this list)
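Alongside engine profilers, a simple external sampler can catch memory growth during soak tests; a sketch using psutil, with an illustrative memory budget:
# Sample the memory of a running game process during a soak test (requires psutil)
import time
import psutil

def soak_memory_check(pid, duration_s=300, interval_s=5, max_rss_mb=1024):
    proc = psutil.Process(pid)
    peak_mb = 0.0
    deadline = time.time() + duration_s
    while time.time() < deadline:
        rss_mb = proc.memory_info().rss / (1024 * 1024)  # resident memory in MB
        peak_mb = max(peak_mb, rss_mb)
        time.sleep(interval_s)
    assert peak_mb <= max_rss_mb, f"Peak RSS {peak_mb:.0f} MB exceeded the budget"
    return peak_mb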
Device Testing
- BrowserStack: cloud-based testing
- AWS Device Farm: real devices
- Firebase Test Lab: Android testing
- TestFlight: iOS beta testing
Conclusion
Effective QA is the foundation of successful games. Combining structured methodologies, smart automation, detailed bug tracking, and broad test coverage ensures your game meets players' expectations. Investing in QA is not a cost - it is protection for your investment and your reputation.
Next Steps
Start by creating a basic test plan. Implement automated tests for critical functionality (a minimal sketch follows). Establish a bug tracking process. Test on multiple devices. Remember: it is far cheaper to find bugs during development than after launch.
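As a starting point for that automation, even a tiny smoke suite pays off; a minimal pytest sketch, where game_build is a hypothetical wrapper module around your build's test hooks:
# Minimal smoke suite (pytest); "game_build" is a hypothetical wrapper around your build
import pytest

game_build = pytest.importorskip("game_build")

@pytest.fixture()
def session():
    s = game_build.launch(headless=True)   # assumed launcher API
    yield s
    s.quit()

def test_boots_to_main_menu(session):
    assert session.wait_for_screen("main_menu", timeout=60)

def test_new_game_reaches_first_level(session):
    session.select_menu("new_game")
    assert session.wait_for_screen("level_1", timeout=120)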


